Darustc4

Darustc4 OP t1_jedw3g4 wrote

AI does not hate you, nor does it like you, but you're made out of atoms it can use for something else. Given an AI that maximizes some metric (dumb example: an AI that wants to make the most paperclips in existence), it will predictably develop convergent instrumental behaviors: self-preservation, so it won't let you turn it off; self-improvement, to make even more paperclips; ambitious resource acquisition by any and all means, again to make even more paperclips; and so on (see instrumental convergence for more details).

As for how it could kill us if it wanted to, if we got in the way, or if we turned out to be more useful dead than alive: hacking nuclear launch facilities, political manipulation, infrastructure sabotage, assassination of key figures, using protein folding to design a deadly virus or nanomachines, and so on.

Killing humanity is not hard for an ASI. But do not panic: just spread the word that building strong AI while we are unprepared might be unwise, and be ready for pushback from blind optimists who believe all of these problems will magically disappear somewhere along the way to ASI.

2

Darustc4 OP t1_je9m2ce wrote

I don't consider myself part of the EY cult, but I must admit that AI progress is getting out of hand and we really do NOT have a plan. Creating a super-intelligent entity with a finger in every pie in the world, while humans have absolutely no control over it, is straight up crazy to me. It could end up working out somehow, but it could also very well devolve into the complete destruction of society.

1

Darustc4 t1_j9qp8wf wrote

"There is infinite demand for deeply credentialed experts who will tell you that everything is fine, that machines can’t think, that humans are and always will be at the apex, people so commited to human chauvinism they will soon start denying their own sentience because their brains are made of flesh and not Chomsky production rules. All that’s left of the denialist view is pride and vanity. And vanity will bury us."

Holy shit.

47

Darustc4 t1_j9p21rt wrote

IMO it is the best thing to do. Promote fear of AI so that people realize it is dangerous and we buy some time to get alignment work in.
I am an AI safety researcher, and let me tell you, it's not looking great: AI is getting stupidly powerful incredibly quickly, and we are nowhere close to making these systems safe/aligned.

3

Darustc4 t1_j8rpldc wrote

Reply to comment by AwesomeDragon97 in Emerging Behaviour by SirDidymus

Oh yeah, everything it says is *so* made up that people find it hard to tell apart text written by an AI from text written by a human. I think you're either giving too little credit to what AI does, or way too much credit to human capabilities.

When a human expert fucks up, gets cocky, alters sources, and falls for confirmation bias, do you also say: yeah, this is simply one of the many flaws of humans, everything they say is made up?

6

Darustc4 t1_j8r62ho wrote

To me, this reads like: "The only real kind of understanding is human-like understanding; token prediction doesn't count because we believe humans don't do that."

If it is effective, why do you care how the brain of an AI operates? Will you still be claiming these systems don't understand in the "real" way when they start causing real harm to society and surpassing us in every field?

19