astrologicrat t1_j9xs9ff wrote

Don't put too much stock in Yudkowsky's hallucinations. He has no understanding of biology and no unique talent for predicting the outcome of AI development. Any time he talks about the subject, it's a Rube Goldberg machine of fantasies.

The reality is that once a computer reaches AGI that is sufficiently more intelligent than humans, there are essentially countless ways it could end humanity. Yudkowsky likes to bloviate about a few hand-picked bizarre examples so that people remember him when they discuss AGI.

Guess it's working.