Submitted by tmblweeds t3_zn0juq in MachineLearning
alekosbiofilos t1_j0f0uuq wrote
Great tech skills. But honestly, I think it is a bad idea!
If it works most of the time, that's even worse! Thing is, these models are basically fancy autocorrect apps. They don't understand anything. Research papers are fairly structured, but not as much as this application needs. For example, this app might be strongly inclined to end every answer with "but more research is needed", or start "debating" with itself on scientific questions that can be studied from several angles. Not to mention things like gene names, gene x environment interactions, and the nuance of what an "interaction" even is (is it genetic, regulatory, physical?).
Maybe for researchers this can work as an easier-to-use search engine for papers. The problem is that the "curation" of the answer is abstracted away from users, and one might spend more time trying to figure out what the thing meant than doing the lit search themselves.
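The "fancy autocorrect" point can be made concrete with a toy sketch: a bigram model that, like an LLM at a much larger scale, just emits the statistically likeliest continuation of the text so far, with no notion of whether the resulting claim is true. (The corpus and model here are made up for illustration; real LLMs are vastly more capable but share the same next-token objective.)

```python
# Toy next-token prediction: greedy autocomplete over a bigram model.
# The model happily produces "more research is needed" because it is
# the most frequent continuation, not because it evaluated any evidence.
from collections import Counter, defaultdict

corpus = (
    "more research is needed . "
    "more data is needed . "
    "more research is required . "
).split()

# Count which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(prompt, steps=4):
    """Greedily append the likeliest next token, `steps` times."""
    words = prompt.split()
    for _ in range(steps):
        candidates = bigrams[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(autocomplete("more"))  # → more research is needed .
```

Scaling this idea up (transformers, trillions of tokens) changes the fluency, not the objective: the output is always "most plausible continuation", which is exactly why unsupervised answers about gene interactions deserve skepticism.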
tmblweeds OP t1_j0hhgpd wrote
I hear you! I definitely want to "do no harm" here—I think while I'm still testing things out I need to plaster a lot more warnings around the site like "THIS IS A PROOF-OF-CONCEPT, NOT MEDICAL ADVICE, DO NOT TRUST."
My ultimate goal would be to make the "curation" of the answer much clearer, so that this would be more of a research tool (like PubMed) and less of a magic oracle.