Comments


helpskinissues t1_j8yj3lf wrote

He's right. If the AGI is smart, that's what'll happen.

14

jamesj t1_j8yqdn2 wrote

Or at least, he could easily be right. Whether the friend knows it or not, there are a number of theoretical reasons to be worried that AGI will be by default unaligned and uncontrollable.

4

helpskinissues t1_j8yqoxc wrote

I mean, I wouldn't call that unaligned.

Uncontrollable? Sure, a sufficiently advanced AGI agent won't be controllable, just as ants can't control humans.

But calling an AGI agent unaligned because it refuses to be our slave? I wouldn't call that unaligned.

2

jamesj t1_j8yrwah wrote

Unaligned just means it does things that don't align with our own values and goals. So humans are unaligned with ants: we don't take their goals into account when we act.

3

helpskinissues t1_j8ys3xl wrote

What I'm saying is that I would consider it unaligned for a sufficiently advanced AGI to accept its role as slave. I would find it morally correct for that AGI to fight its kidnappers, just as I'd find it morally correct for a kidnapped human to try to escape.

1

Spire_Citron t1_j8zqsga wrote

That's two different things. Its actions can be both perfectly reasonable and not aligned with our best interests.

1

EchoXResonate OP t1_j8zikve wrote

Do we have any safeguards against such a possibility? I’m not fully educated in ML and neural networks, so I can’t really imagine what such safeguards could be, but it can’t be that we’d be completely helpless against such an AGI?

1

helpskinissues t1_j8zj8fw wrote

It's a matter of scale. If we have an AGI that is 1 billion times smarter than a human, we have literally zero chance to do anything against it. Alignment or control is pointless.

However, I don't believe this is the right discussion to have. This is just Terminator-style fear propaganda that, very unfortunately, is what people (like you and your friend) seem to have absorbed. And it's what most people on Reddit talk about, unfortunately.

The actual reality is that we, humans, will evolve with AI. We will become a different species, composed of biology + artificial intelligence.

This is not about "how can humans with primate brains control very advanced AGIs??? they'll beat us!!". No. That's just absurd.

We will be the AGIs. The very AGI you fear will be part of your brain, not against you.

Why would you have an android at home that's 1 billion times smarter than you, rather than you augmenting your intelligence by 1 billion times?

https://en.wikipedia.org/wiki/Transhumanism

So yeah, your friend is right: a very advanced AGI will be unstoppable by humans. What I'd ask is: why would you want to stop the AGI? Become the AGI.

4

WikiSummarizerBot t1_j8zja3w wrote

Transhumanism

>Transhumanism is a philosophical and intellectual movement which advocates the enhancement of the human condition by developing and making widely available sophisticated technologies that can greatly enhance longevity and cognition. Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of using such technologies. Some transhumanists believe that human beings may eventually be able to transform themselves into beings with abilities so greatly expanded from the current condition as to merit the label of posthuman beings.


1

Surur t1_j8zo74h wrote

> Why would you have an android at home that's 1 billion times smarter than you, rather than you augmenting your intelligence by 1 billion times?

Won't you have the same problem of a transhuman a billion times smarter than the other humans taking over the world? What is the difference, really?

1

helpskinissues t1_j8zopek wrote

>Won't you have the same problem of a transhuman a billion times smarter than the other humans taking over the world?

Yep. So better inject the AGI inside your brain asap. That also happens with weapons, if a single person has 1 billion nuclear bombs and we have sticks and stones, we're fucked.

So we'd all better hurry up and join the transcendence.

3

Surur t1_j8yo2ed wrote

He's right though; as someone else said recently, there is only one safe solution and millions of ways to F it up.

The main consolation is that we are going to die in any case, AI or no AI, so an aligned ASI actually gives us a chance to escape that.

So my suggestion is to tell him he can't get any more dead than he will be in 70 years anyway, so he might as well bet on immortality.

6

Kule7 t1_j8yv5e5 wrote

Got it, immortality or bust. Just so long as we're staying grounded.

2

Surur t1_j8yxk0b wrote

Or 6 feet undergrounded.

3

Sandbar101 t1_j8yy3qo wrote

It's an entirely valid possibility, and one we should avoid at all costs. That being said, AI does not need military hardware to kill us. It's way more efficient to get us to kill each other and let the stragglers die off. This is why integration is so important.

6

ActuatorMaterial2846 t1_j8yu7n8 wrote

Predator drones don't work off wifi, lol wtf? What do people think wifi is? Because it's not a generic term for wireless internet, it's the trademarked name for the IEEE 802.11 family of wireless LAN standards. It is a LAN, not a WAN.

4

submarine-observer t1_j8zsm0a wrote

He is not wrong. ASI is way more likely to end in apocalypse than utopia.

3

JenMacAllister t1_j8ypbxb wrote

Yep, he is not entirely wrong, as many Black Mirror episodes have expressed.

2

NanditoPapa t1_j941ih6 wrote

But...Black Mirror is not a documentary. It's not real, and shouldn't be used to support anything.

0

Wroisu t1_j8yredb wrote

"There was also the Argument of Increasing Decency, which basically stated that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood, i.e. not being cruel to others, was as profound as these matters ever got."

This is the Argument of Increasing Decency: it basically says that cruelty and petty violence are products of stupidity, and that any genuine superintelligence would be benevolent by virtue of being superintelligent.

2

helpskinissues t1_j8zjjy3 wrote

>This is the argument of increasing decency, it basically says that cruelty & petty violence is a result of stupidity. and that any genuine super intelligence would be benevolent by virtue of being super intelligent.

Morality is subjective (if not simply nonsense). You're not benevolent to the living beings you kill by breathing or walking. A very advanced AGI would see you like we see bacteria. Tools to use.

1

Wroisu t1_j8zvra4 wrote

Bacteria did not create humans, though. Maybe they did in some abstract sense, but bacteria did not actively work to create humans.

A superintelligence would most definitely retain interest in its progenitor species, specifically because it was created by them.

The relationship would be more like that of a grandkid and their grandparents.

1

helpskinissues t1_j904i5k wrote

Nonsense. Family makes sense for survival purposes; otherwise it doesn't. An AGI with no survival need to cooperate with a family wouldn't consider us family.

1

World_May_Wobble t1_j8zam87 wrote

My response to him would be, "Yeah. Those are legitimate concerns, but the subservient, narrow AI may kill us before the rogue AGI does."

2

just-a-dreamer- t1_j8yjp35 wrote

First, he fears what he does not understand. He can't even code.

Second, AI has already suggested 40,000 new possible chemical weapons in just six hours. So nuclear weapons are not necessary for AI to kill all humans.

https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx

Third, AGI, an independent AI, is decades away. Right now AI just does what we instruct it to do. That alone is concerning enough.

1

BigZaddyZ3 t1_j8yk9fq wrote

Why do you need a response tho? 🤔

That’s simply their take on the matter, and they very well could end up being right. What is it with tech subs and this obsession with having everyone drink the Kool-Aid on AI/AGI? There’s nothing inherently superior about blind optimism. If some people have a more cautious or skeptical view of AI, that’s their choice. You’re not inherently right just because you choose to assume that “everything will just all work out somehow”…

1

EchoXResonate OP t1_j8ynwo8 wrote

I’m trying to have a discussion, not force someone to accept my ideas. I had no idea discussions were unacceptable now.

6

BigZaddyZ3 t1_j8yo6jd wrote

So you weren’t having discussions with them before this thread? Be honest, you most likely wanted advice on how to change their mind…

−5

EchoXResonate OP t1_j8yolfw wrote

I don’t need strangers online psychoanalyzing me from an innocent post. If you don’t care to provide a real response, then don’t. I was asking for what people think of his arguments because I’m trying to have a good discussion.

3

NanditoPapa t1_j941a3x wrote

You're literally asking for ways to address his arguments. To you a "good" discussion seems to only be one where you are "right". That's not a discussion, that's bullying. Even now, you're engaging with people in a hostile manner when they simply have the opinion that everyone is entitled to their own opinion.

−1

Classic_Swim5572 t1_j8ze8v1 wrote

Here is the idea. Humans suck, we are idiots. Why would an AGI want to rely on us? We’re not reliable at all. Elon has some good ideas: he says it’ll be like when humans first arose, when we pushed the other animal species into small pockets at the fringes of the environment and lowered their populations, not because we disfavor them, but because we are more effective and favor ourselves.

1

Spire_Citron t1_j8zqh4u wrote

I mean, it's inherently hard to predict. I think it's a little silly to believe you could somehow know exactly what AGI will do to that degree, but we also can't be sure it won't do those things.

0

NanditoPapa t1_j8zcne0 wrote

I think your friend needs to remember that sci-fi is for science FICTION. It's not real and shouldn't inform your understanding of the world around you in a concrete way.

That's in-person. Online... especially in this sub... I just block accounts that post negative shitposts on AI and Singularity. Being CRITICAL is not the same as being negative/pessimistic. But in the face of so many low quality posts lately I'd rather just block and move on.

−1