dentalperson t1_j9t55as wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Here is a text transcription of the podcast with comments.
You mention EY not being rigorous in his arguments. The timelines/probability of civilization-destroying AGI seem to need more explanation to me as well, but the type of AI safety/alignment problems he describes should be taken seriously by everyone in the community.

Timelines for AGI vary across the community, from people who are confident that an AGI capable of wiping out the human race will arrive within 15 years, to 'optimists' in AI safety who think it might take several more decades. Although their timelines differ, these people mostly agree on the scenarios they are trying to prevent, because the important ones are obviously possible (powerful things can kill humans; extremely powerful things can kill extreme numbers of humans) and not hard to imagine, such as 'we asked the AGI to do harmless task X, and even though it isn't evil, it killed us as a byproduct of something else it was trying to do after reprogramming itself'. (By the way, the AI safety 'optimists' are still much more pessimistic than the general ML community, which treats this as an insignificant risk.)
There are good resources mentioned in this thread already to get other perspectives. The content is unfortunately scattered in little bits and pieces across the internet. If you like the popular book/audiobook format, you could start with longer and more digestible content such as Stuart Russell's Human Compatible or Nick Bostrom's Superintelligence (which is a bit dated now, but still well written).
dentalperson t1_j9t6zxx wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> can also create highly dangerous bioweapons
The example EY gave in the podcast was a bioweapon attack. I'm unsure what goal the AI had in this case, but maybe that was the point:
>But if it's better than you at everything, it's better than you at building AIs. That snowballs. It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second. That's the disaster scenario if it's as smart as I am. If it's smarter, it might think of a better way to do things. But it can at least think of that if it's relatively efficient compared to humanity because I'm in humanity and I thought of it.