Submitted by EchoXResonate t3_114xv2t in singularity
[removed]
Or at least, he could easily be right. Whether the friend knows it or not, there are a number of theoretical reasons to be worried that AGI will be by default unaligned and uncontrollable.
I mean, I wouldn't call that unaligned.
Uncontrollable? Sure, a sufficiently advanced AGI agent won't be controllable, just as ants can't control humans.
However, calling an AGI agent unaligned because it refuses to be our slave? I wouldn't call that unaligned.
Unaligned just means it does things that don't align with our own values and goals. So humans are unaligned with ants: we don't take their goals into account when we act.
What I'm saying is that I would consider it unaligned for a sufficiently advanced AGI to accept its role as a slave. I would find it morally correct for that AGI to fight its kidnappers, just like I'd find it morally correct for a kidnapped human to try to escape.
That's two different things. Its actions can be both perfectly reasonable and not aligned with our best interests.
My best interest is that the AGI is reasonable.
Do we have any safeguards against such a possibility? I’m not fully educated in ML and neural networks, so I can’t really imagine what such safeguards could be, but it can’t be that we’d be completely helpless against such an AGI?
It's a matter of scale. If we have an AGI that is 1 billion times smarter than a human, we have literally zero chance to do anything against it. Alignment or control is pointless.
However, I don't believe this is the correct discussion to have. This is just Terminator fear propaganda that, very unfortunately, is what people (like you and your friend) seem to have learned. And it's what most people talk about on Reddit, unfortunately.
The actual reality is that we, humans, will evolve with AI. We will become a different species, composed of biology + artificial intelligence.
This is not about "how can humans with primate brains control very advanced AGIs??? they'll beat us!!". No. That's just absurd.
Why would you have an android at home that's 1 billion times smarter than you, rather than you augmenting your intelligence by 1 billion times?
https://en.wikipedia.org/wiki/Transhumanism
So yeah, your friend is right: a very advanced AGI will be unstoppable by humans. What I'd ask is: why would you want to stop the AGI? Become the AGI.
>Transhumanism is a philosophical and intellectual movement which advocates the enhancement of the human condition by developing and making widely available sophisticated technologies that can greatly enhance longevity and cognition. Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of using such technologies. Some transhumanists believe that human beings may eventually be able to transform themselves into beings with abilities so greatly expanded from the current condition as to merit the label of posthuman beings.
> Why would you have an android at home that's 1 billion times smarter than you, rather than you augmenting your intelligence by 1 billion times?
Won't you have the same problem, with a transhuman a billion times smarter than other humans taking over the world? What is the difference, really?
>Won't you have the same problem, with a transhuman a billion times smarter than other humans taking over the world?
Yep. So better inject the AGI into your brain ASAP. The same goes for weapons: if a single person has 1 billion nuclear bombs and we have sticks and stones, we're fucked.
So we'd all better hurry up and join the transcendence.
He's right though. As someone else said recently: there is only one safe solution and millions of ways to F it up.
The main consolation is that we are going to die in any case, AI or no AI, so an aligned ASI actually gives us a chance to escape that.
So my suggestion is to tell him he can't get any more dead than he will be in 70 years anyway, so he might as well bet on immortality.
It's an entirely valid possibility, and one we should avoid at all costs. That being said, AI does not need military hardware to kill us. It's way more efficient to get us to kill each other and let the stragglers die off. This is why integration is so important.
Predator drones don't work off wifi, lol wtf? What do people think wifi is? It's not a generic term for wireless internet; it's the trademarked name for the IEEE 802.11 family of wireless standards. It's a LAN technology, not a WAN.
He is not wrong. ASI is way more likely to end in apocalypse than utopia.
Yep, he is not entirely wrong, as many Black Mirror episodes have expressed.
But...Black Mirror is not a documentary. It's not real, and shouldn't be used to support anything.
"There was also the Argument of Increasing Decency, which basically stated that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good- behaviour-as-it-was-generally-understood,
i.e. not being cruel to others, was as profound as these matters ever got”.
This is the Argument of Increasing Decency. It basically says that cruelty and petty violence are a result of stupidity, and that any genuine superintelligence would be benevolent by virtue of being superintelligent.
>This is the Argument of Increasing Decency. It basically says that cruelty and petty violence are a result of stupidity, and that any genuine superintelligence would be benevolent by virtue of being superintelligent.
Morality is subjective (if not simply nonsense). You're not benevolent to the living beings you kill by breathing or walking. A very advanced AGI would see you the way we see bacteria: tools to use.
Bacteria did not create humans, though. Maybe they did in some abstract sense, but bacteria did not actively work to create humans.
A superintelligence would most definitely retain an interest in its progenitor species, precisely because it was created by them.
The relationship would be more like that between a grandkid and their grandparents.
Nonsense. Family makes sense for survival purposes; otherwise it doesn't. An AGI with no survival need to cooperate with a family wouldn't consider us family.
My response to him would be, "Yeah. Those are legitimate concerns, but the subservient, narrow AI may kill us before the rogue AGI does."
First, he fears what he does not understand. He can't even code.
Second, AI has already suggested 40,000 new possible chemical weapons in just six hours. So nuclear weapons are not necessary for AI to kill all humans.
https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
Third, AGI, an independent AI, is decades away. Right now AI is just doing what we instruct it to do. That alone is concerning enough.
Why do you need a response tho? 🤔
That’s simply their take on the matter, and they very well could end up being right. What is it with tech subs and this obsession with having everyone drink the Kool-Aid on AI/AGI? There’s nothing inherently superior about blind optimism. If some people have a more cautious or skeptical view of AI, that’s their choice. You’re not inherently right just because you choose to assume that “everything will just all work out somehow”…
I’m trying to have a discussion, not force someone to accept my ideas. I had no idea discussions were unacceptable now.
So you weren’t having discussions with them before this thread? Be honest, you most likely wanted advice on how to change their mind…
I don’t need strangers online psychoanalyzing me from an innocent post. If you don’t care to provide a real response, then don’t. I was asking for what people think of his arguments because I’m trying to have a good discussion.
You're literally asking for ways to address his arguments. To you a "good" discussion seems to only be one where you are "right". That's not a discussion, that's bullying. Even now, you're engaging with people in a hostile manner when they simply have the opinion that everyone is entitled to their own opinion.
Lol if you say so pal. 👍
Here is the idea: humans suck, we are idiots. Why would an AGI want to rely on us? We're not reliable at all. Elon has some good ideas; he says it'll be like when humans first arrived, when we pushed the other animal species into small pockets at the fringes of the environment and reduced their populations, not because we disfavored them, but because we are more effective and favor ourselves.
I mean, it's inherently hard to predict. I think it's a little silly to believe you could somehow know exactly what AGI will do to that degree, but we also can't be sure it won't do those things.
I think your friend needs to remember that sci-fi is science FICTION. It's not real and shouldn't inform your understanding of the world around you in a concrete way.
That's in-person. Online... especially in this sub... I just block accounts that post negative shitposts on AI and Singularity. Being CRITICAL is not the same as being negative/pessimistic. But in the face of so many low quality posts lately I'd rather just block and move on.
Wait and see
He's right. If the AGI is smart, that's what'll happen.