
Alternative_Fig3039 t1_jed8mc5 wrote

Can someone explain to me, an idiot, not whether an AI with superintelligence could wipe us out, that I can comprehend easily enough, but why? And how? Let's say, as he does in the article, we cross this threshold and build a superintelligent AI, and then we all die within what seems like weeks, days, minutes? Would it nuke us all? It's not like we have robot factories lying around that it could manufacture Sentinels in or something. I understand, in theory, that we can't really comprehend what superintelligence is capable of because we ourselves are not superintelligent. But other than launching our current WMDs, what infrastructure exists for an AI to eliminate us? I'm talking about the near future. In 50-100 years things might be quite different. But this article makes it sound like we'll be dead in 3 months. I'd really appreciate an even-headed answer; not gonna lie, this freaked me out a bit. Not great to read right before bed.


Darustc4 OP t1_jedw3g4 wrote

AI does not hate you, nor does it like you, but you're made of atoms it can use for something else. Given an AI that maximizes some metric (dumb example: an AI that wants to make the most paperclips in existence), it will likely develop various convergent properties, such as: self-preservation that won't let you turn it off, a drive to improve itself to make even more paperclips, ambitious resource acquisition by any and all means to make even more paperclips, etc. (see instrumental convergence for more details).

As for how it could kill us if it wanted to, or if we got in the way, or if we turned out to be more useful dead than alive: hacking nuclear launch facilities, political manipulation, infrastructure sabotage, assassination of key figures, protein folding to create a deadly virus or nanomachine, etc.

Killing humanity is not hard for an ASI. But do not panic; just spread the word that building strong AI might be unwise when unprepared, and expect pushback from blind optimists who believe all of these problems will magically disappear at some point along the way to ASI.
