astrologicrat t1_j9xs9ff wrote

Don't put too much stock in Yudkowsky's hallucinations. He has no understanding of biology and no unique talent for predicting the outcome of AI development. Any time he talks about the subject, it's a Rube Goldberg machine of fantasies.

The reality is that once a computer hits AGI that's sufficiently more intelligent than humans, there are basically countless ways it could end humanity. Yudkowsky likes to bloviate about a few hand-picked bizarre examples so that people remember him when they discuss AGI.

Guess it's working.

20

ActuatorMaterial2846 t1_j9yauge wrote

Yeah, I think people took that comment about 'instantly killing us by releasing a poison in the atmosphere' a bit too seriously. Maybe because it was so specific, idk.

But he does have a point that we should be concerned about an autonomous entity smarter than humans across all cognitive abilities. An entity with no known desires apart from a core function to improve and adapt to its environment.

Such an entity would almost certainly begin competing with us for resources. So his emphasis on alignment is correct, and he is probably not overstating the difficulty of achieving it.

Everything else he says is a bit too doomer with little to back it up.

5