VioletCrow t1_j9smth5 wrote
Reply to comment by perspectiveiskey in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> I simply cannot imagine the real world damage that would be inflicted when (not if) someone starts pumping out "very legitimate sounding but factually false papers on vaccine side-effects".
I mean, just look at the current anti-vaccine movement. You just described the original Andrew Wakefield paper about vaccines causing autism. We don't need AI for this to happen, just a very credulous and gullible press.
governingsalmon t1_j9svhv8 wrote
I agree that we don’t necessarily need AI for nefarious actors to spread scientific misinformation, but I do think AI introduces another tool or weapon that could be used by the Andrew Wakefields of the future in a way that might pose unique dangers to public health and public trust in scientific institutions.
I’m not sure whether malevolence or incompetence has contributed more to vaccine misinformation, but if someone intentionally set out to produce fake but convincing scientific-seeming work, wouldn’t something like a generative language model let them do so at massively higher scale with little knowledge of the specific field?
I’ve been wondering what would happen if someone flooded a set of journals with hundreds of AI-written manuscripts without any real underlying data. One could even have all the results support a given narrative. Journals might develop intelligent ways of counteracting this but it might pose a unique problem in the future.
perspectiveiskey t1_j9u1r9n wrote
AI reduces the "proof of work" cost of an Andrew Wakefield paper. This is significant.
There's a reason people don't dedicate long hours to writing completely bogus scientific papers that bring them literally no personal gain: they would rather live their lives and do things like have a BBQ on a nice summer day.
The work involved in sounding credible and legitimate is one of the few barriers holding up the edifice of what we call Science. The other barrier is peer review...
Both of these barriers are under serious threat from the ease of generation. AI is our infinite-monkeys-on-infinite-typewriters moment.
This is to say nothing of much more insidious and clever intrusions into our thought institutions.