
dentalperson t1_j9t55as wrote

Here is a text transcription of the podcast with comments.

You mention EY not being rigorous in his arguments. The timelines/probability of civilization-destroying AGI seem to need more explanation to me as well, but the type of AI safety/alignment problems he describes should be taken seriously by everyone in the community. Timelines for AGI vary widely within that community, from people who are confident that an AGI capable of wiping out the human race could arrive within 15 years, to 'optimists' in AI safety who think it might take several more decades. Although their timelines differ, these people mostly agree on the scenarios they are trying to prevent, because the important ones are clearly possible (powerful things can kill humans; extremely powerful things can kill enormous numbers of humans) and not hard to imagine, e.g. 'we asked the AGI to do harmless task X, and even though it isn't evil, it killed us as a byproduct of something else it was trying to do after reprogramming itself'. (By the way, the AI safety 'optimists' are still much more pessimistic than the general ML community, which treats this as an insignificant risk.)

There are good resources mentioned in this thread already if you want other perspectives. The content is unfortunately scattered in bits and pieces across the internet. If you prefer popular books or audiobooks, you could start with a longer, more digestible treatment such as Stuart Russell's Human Compatible or Nick Bostrom's Superintelligence (which is a bit dated now, but still well written).
