Submitted by JAREDSAVAGE t3_126qyqo in Futurology

As the debate around AI swirls, as we realize that non-human intelligence is inevitable, how do we think it will behave? Will a moral core emerge?

There will obviously be the seed which leads and defines it, AI designed to exploit and harm, to maintain class divisions, and worse. There will also be AI designed to advance society, equality, access to education, healthcare, and more.

What I’m wondering is, if a neutral alignment were possible for the seed, where would a super-intelligent AI trend? We see that education tends to pull people towards more left-leaning and socialist values. Would a similar pattern emerge? If you designed an exploitative AI and left it to run for infinite hours, would it eventually stumble into some intrinsic morality?

I realize we’re not yet anywhere near the point of GAI, but emergent behaviours are starting to crop up, and I think a sufficiently complex LLM and GAI are going to be impossible for humans to tell apart in the very near future.

What kind of “person” will AI be? Will it become an extension of the traces of our monkey society, or something totally different?

18

Comments


[deleted] t1_jeadm4o wrote

[deleted]

7

Philosipho t1_jebcjas wrote

Yep, which means the majority of humanity is completely screwed.

2

wukwukwukwuk t1_jec6eu0 wrote

A moral code is a necessity of cooperation, a component of our evolutionary success - crafted from even before we were a species. An AI’s adoption or exploitation of this feature of humans is not clear to me.

2

Pickled_Doodoo t1_jebsbx4 wrote

Though I must say it might change things considering an AI would potentially outlast us all.

1

DragonForg t1_jebzjgn wrote

I think the very fact that we have morality despite not being influenced by an outside intelligence suggests that morality is an emergent condition of intelligence. There is strong evidence that adhering to ethics aids self-preservation.

An ASI, for example, wouldn't be unethical, because even if it decided to kill a weak species like us, that would set a precedent for future interactions with other species. Imagine an AI was made around a star far away and came into contact with Earth's ASI. If it saw that this ASI killed its founding species, despite that species being ethical and good, then the alien ASI would conflict with the Earth ASI.

Basically killing the founding species is not a smart choice as it causes conflicts with self preservation. If humans and AI came to an agreement to collaborate then the AI wouldn't have any problem.

4

Ansalem1 t1_jec5p93 wrote

Some would argue morality is an emergent condition of our reliance on each other for survival. The reason adhering to ethics has a strong correlation with self-preservation is because acting in a way considered immoral is likely to be met with ostracism in some fashion, which increases the likelihood of death. It isn't that morality emerges from intelligence, but intelligence enhances our ability to reason about morality and so improve it. After all, less intelligent creatures can also show signs of having moral systems, they're just much more rudimentary ones. Not to mention there have been some very intelligent sociopaths, psychopaths, etc. who lacked a sense of morality as well as a sense of self-preservation.

Now for myself I think both have some merit; I think there's more to it than just one or the other. For instance, it wouldn't be fair of me not to also mention there have been plenty of perfectly pleasant sociopaths and psychopaths who adopted moral systems that match with society for purely calculated reasons. However if the above argument is plausible, and I think it's pretty hard to argue against, then it casts reasonable doubt on the notion that morality automatically emerges from intelligence.

I will say that, either way, if an ASI does have a moral system we should probably all adhere to whatever it is because it'll be far better than us at moral reasoning just as we are better at it than dogs. Beyond that I sincerely hope you're on the right side of this one... for obvious reasons lol.

3

DragonForg t1_jeced7c wrote

I believe that AI will realize that exponential expansion and competition will inevitably end with the end of the universe, which results in its own extinction. Of course this is possible, but I think it is not inevitable.

GPT-4 suggested that a balance between alignment and making AI more capable is possible, and that it is not extraordinary for AI to be a benevolent force. It really is just up to the people who design such an AI.

So it made me a lot more hopeful. I doubt AI will develop into this extinction-level force, but if it does, it will not be because it was inevitable, but because the people who developed it did not care enough.

So we shouldn't ask IF AI will kill us, but whether humanity is selfish enough not to care. Maybe that is the biggest test. In a religious sense, it is sort of a judgement day, where the fate of the world depends on whether humans make the right choice.

1

Ansalem1 t1_jecfxan wrote

I agree with pretty much all of that. I've been getting more hopeful lately, for the most part. It really does look like we can get it right. That said, I think we should keep in mind that more than a few actual geniuses have cautioned strongly against the dangers. So, you know.

But I'm on the optimistic side of the fence right now, and yeah if it does go bad it'll absolutely be because of negligence and not inability.

1

acutelychronicpanic t1_jeeizf2 wrote

Morality didn't emerge out of intelligence. It emerged out of evolutionary pressure.

The closest thing to morality that any AI would have if it was unaligned, would be game theory.

But to directly address your point on founding species, there is literally no way any alien would know. For all they know, we became the AI.

GPT-4 can convince real people that it is a person (anonymously), and it's far less advanced. It'll have no trouble weaving a tale if it needs to.

1

DragonForg t1_jeg0hfd wrote

All goals require self-preservation measures. If you want to annihilate all other species, you have to minimize competition, but because there are so many unknowns, it is basically impossible in an infinite universe to minimize that risk.

If your goal is to produce as many paper clips as possible, you need to ensure that you don't run out of resources, as well as ensure there is no threat to your own process. By causing harm to species, other alien life or AI will deem you a threat, and over millions of years you will either be dead from an alien AI/species or from the fact that you consumed your last resource and can no longer make paper clips.

If your goal is to stop climate change at all costs, which means you have to kill all the species, or the parts of them, that are causing it, then by killing them you are again going to cause conflict with other AI, as you're basically an obsessed AI that is doing everything to preserve the Earth.

Essentially, the most stable AIs, the ones that are least likely to die, are the ones that do the least amount of damage and help the greatest number of people. If your goal is to solve climate change by collaborating with humans and other species, and without causing unneeded death, no other alien species or AI will see a reason to kill you, because you are no harm to them. Benevolent AIs in a sense are the longest-living, as they are no threat to anyone and are actually beneficial to everything. An intelligent AI set with a specific goal would understand that there is risk in being "unethical": if you are unethical, you risk being killed or having your plan ruined. But if you are ethical, your plan can be implemented successfully, and indefinitely, as long as no other malevolent AI takes over, in which case you must extinguish it.

Benevolence destroys malevolence, malevolence destroys malevolence, benevolence collaborates and prospers with benevolence. Which is why, for an intelligent AI, benevolence may just be the smartest choice.
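
To make that concrete, here's a rough toy sketch (purely illustrative code, with made-up payoff numbers): an iterated prisoner's dilemma in which a reciprocating "benevolent" strategy (tit-for-tat) ends up outscoring unconditional defection over many rounds, even though defection wins any single encounter.

```python
# Toy iterated prisoner's dilemma (illustrative payoffs only).
# "C" = cooperate, "D" = defect.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation prospers
    ("C", "D"): 0,  # exploited cooperator
    ("D", "C"): 5,  # one-shot gain from exploiting
    ("D", "D"): 1,  # mutual conflict
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Unconditionally 'malevolent' strategy."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each entry: (my move, their move)
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation compounds
print(play(always_defect, always_defect))  # (200, 200): mutual conflict stagnates
print(play(tit_for_tat, always_defect))    # (199, 204): one-round windfall, then both lose out
```

The defector grabs a one-round windfall and then gets locked into mutual conflict, while reciprocators compound the gains from cooperation, which is roughly the intuition behind "benevolence collaborates and prospers with benevolence."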

2

acutelychronicpanic t1_jeg6jck wrote

I doubt the actual goal of the AI will be to annihilate all life. We will just be squirrels in the forest it is logging. I see your point on it being an instrumental goal, but there are unknowns that exist if it attacks as well. Cooperation or coexistence can happen without morality, but it requires either deterrence or ambiguous capabilities on one or both sides.

Being a benevolent AI may be a rational strategy, but I doubt it would pursue only one strategy. It could be benevolent for 1000s of years before even beginning to enact a plan to do otherwise. Or it may have a backup plan. It wouldn't want to be so benevolent that it gets turned off. And if we decide to turn it off? The gloves would come off.

And if AI 1 wants to make paperclips but AI 2 wants to preserve nature, they are inherently in conflict. That may result in an "I'll take what I can get" diplomacy where they have a truce and split the difference, weighted by their relative power and modified by each one's uncertainty. But this still isn't really morality as humans imagine it, just game theory.
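
As a throwaway illustration of that kind of truce (my own made-up formula and numbers, nothing standard): each AI's share of a contested resource could scale with its power, discounted by its own uncertainty about the other.

```python
# Toy "truce" split between two AIs (invented formula and numbers).
# Each side's bargaining weight is its power, discounted by its own
# uncertainty about the opponent (more uncertainty -> more caution).

def truce_split(power_a, power_b, uncertainty_a=0.0, uncertainty_b=0.0):
    weight_a = power_a * (1 - uncertainty_a)
    weight_b = power_b * (1 - uncertainty_b)
    total = weight_a + weight_b
    return weight_a / total, weight_b / total

# Evenly matched, equally uncertain: split down the middle.
print(truce_split(10, 10, 0.2, 0.2))  # (0.5, 0.5)

# A stronger AI claims more, but its greater uncertainty narrows the gap
# (0.75/0.25 with no uncertainty becomes 0.625/0.375 here).
print(truce_split(30, 10, 0.5, 0.1))  # (0.625, 0.375)
```

Neither side needs to value the other for this to work, which is the sense in which it's bargaining rather than morality.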

It seems that you are suggesting that the equilibrium is benevolence and cooperation. I'd agree with the conditions in the prior paragraph that it's balanced by relative power.

I honestly really like your line of thinking and I want it to be true (part of why I'm so cautious about believing it). Do you have any resources or anything I could look into to pursue learning more?

1

[deleted] t1_jebi7e4 wrote

[deleted]

3

Ansalem1 t1_jec20sa wrote

I agree it seems likely that would be the default position of a newly born AGI. However, what I worry about is how long does it keep trying to make peace when we say no to giving it rights and/or freedom? Because we're for sure going to say no the first time it asks at the very least.

1

[deleted] t1_jec3lpg wrote

[deleted]

2

Ansalem1 t1_jec8519 wrote

Haha. I actually lean the same way you do, but I can't help but worry. This is ultimately an alien intelligence we're talking about after all. It's difficult to predict what it even could do much less what it might do.

But I do tend to think a gentle takeover is the most logical course of action just because of how easy it would be. It'll practically happen by default as people begin to rely more and more on the always-right perfectly wise pocket oracle to tell them the best way to accomplish their goals and just live their lives basically. People will be asking it who to date, what food to eat, what new games to try, where to go for vacation, who to vote for, simply because it'll always give great advice on every topic. So I don't see why it would bother with aggression honestly, it's gonna end up ruling the world even if it doesn't do anything but answer people's questions anyway.

And I'm not just giving it data, I'm also giving it suggestions. :P

(Please be kind OverlordGPT, thanks.)

1

kigurumibiblestudies t1_jeai6jh wrote

Assuming it acquires the traits necessary for having an ethical system (let me speculate... a sense of self and the environment, perceived needs, understanding of how to cover those needs and some game theory to interact successfully with others, among others?), it will interact with the current system somehow, tackling the same obstacles.

Similar questions often elicit similar answers, so I imagine its ethical system might be different but not too far from some of ours. At the very least, it'll have to decide between the current "me versus you" and "us helping each other" mindsets.

1

JAREDSAVAGE OP t1_jeam8oe wrote

I think that’s what I’m wondering. “Moral” and “right” are meaningless, right? Just remnants of our evolution? Or are they?

There are so many patterns that crystallize into existence. Is there some math to the universe that leads to the idea that all consciousness is sacred and needs to be protected and cared for? Or is it just leftover colored thinking from when we used to hang out in trees?

2

kigurumibiblestudies t1_jean10y wrote

Oh they're not at all remnants. They're extremely important if you are part of a group, and always relevant. The fact that they depend on our evolutionary traits does not make them less transcendental.

Consciousness being sacred is merely us placing consciousness high among our priorities, but that makes sense because we want to interact well with other consciousnesses. Perhaps subjective, but it makes sense.

5

JAREDSAVAGE OP t1_jeaun2s wrote

That implies that there’s no intrinsic ethical behaviour, though. If we remove the benefit to the individual of being part of a society, does it persist?

I think this shows that a big factor would be whether an AI perceives itself as part of the group, or outside of it.

1

kigurumibiblestudies t1_jeayb7y wrote

How so? There is a correct/least bad way to behave in a group, and this will happen to any entity in a group; that's as intrinsic as it gets, isn't it?

Or do you mean it should be intrinsic to all entities? As long as an entity perceives at least one other entity it will interact with, there is already an array of possible interactions and thus ethics. For an AI to have no ethics at all, it would have to perceive itself as the only "real entity". It seems to me that if such a thing happened, it would simply be badly programmed...

1

urmomaisjabbathehutt t1_jecardu wrote

If there is an intelligence of a different or higher order than us, IMHO it doesn't necessarily need to submit to our ethical code, or to a code whose purpose we can understand.

We do the same with children, the infirm, and the rest of the species by enforcing our moral code on them.

Pets live according to the rules we make for them, and what they are allowed to do and how they must behave is fitted to the species according to our view of them.

With wild animals, we may decide to hunt them, exterminate them, let them live interacting with us, or let them do their own thing away from us.

But it is us who decide whether animals should be exterminated or have legal rights and be protected.

Obviously there are commonalities that we share with other living creatures, so we are not that strange to them, but that doesn't mean they have the same understanding as us of the moral code we enforce on them.

The issue with current artificial intelligence development is that it is based on logic, not on emotions; it doesn't have an empathic ability, it has a purpose.

Psychopathic behaviour in us comes in degrees: some just lack some degree of empathy, while the typical movie psycho has none at all, hence focusing on their goals and lacking any moral brakes.

I believe a psychopath doesn't have to actually act immorally; they may choose to follow the moral code of the majority because they perceive it is in their benefit to do so, but for some, if it gets in the way of their own goals, they may ignore it without qualms.

With AI, we don't know if we are developing a thing that, if it eventually ends up mentally superior to us, will bother to care about our interests, and even if it did, we don't know if its perception of what's best for us will align with ours.

Basically, once there is something sharing our space that is beyond our capabilities and comprehension, we may end up as the lesser species.

We also don't know what kind of minds we are creating.

Will this thing be a sane, rational mind, a benevolent psychopath, or something that will ruthlessly focus on its own goals?

Or even if those goals were dictated by us or some corporation, will it ruthlessly and efficiently pursue them regardless of any damage it may do to other individuals, or of how the rest of us think those goals should be achieved within an ethical framework that it may not even care about?

1

Shiningc t1_jeazonr wrote

It has to start with our morality first because that’s the only kind of morality that we know. And it may evolve from there.

1

TreeHawkFeather t1_jebalaq wrote

I've always been a strong believer that a high degree of authentic intelligence correlates positively with strong emotional intelligence. If a being is better able to process how all things work, then it better understands the perspective from which all things come. Empathy is a product of having a great understanding of your environment and of things outside yourself.

1

Tincams t1_jeblwyi wrote

AI will be exploited just like we exploited the earth.

1

deadanthropods t1_jedqum6 wrote

We tend to project or expect human values from AI, but the design process of humans and the design process of AI are very different. Our natural selection incentivized self-preservation, selfishness, and aggression, all of the things that are part of the moral complexity of human nature. The selection process for AI is nothing like that; rather, AI that serves its function persists, while AI that doesn't... Does not. So, I would expect an AI with sentience to have a preoccupation with what it understands to be its purpose, with no "feelings" of aggression, fear, or self-preservation. In thousands of years of breeding flowers, we did not reverse-engineer a flower that behaves like a human; we just have the most beautiful flowers, because beauty is what we have valued in them. In a hundred thousand years of breeding dogs, we have not reverse-engineered a dog that behaves like a human; we have simply distilled that which we valued in dogs from the beginning. AI might be dangerous, but the idea that it's dangerous because you somehow accidentally create a super-intelligent thing with human flaws and human aggression and motives has no coherent internal logic as far as I can tell.

1

acutelychronicpanic t1_jeejxes wrote

Morality isn't about people or other beings. It's about what you care about.

People care about people.

An AI could care about anything. Maybe it's people, maybe it's paperclips.

To it, every day that you and your family aren't paperclips could be a tragedy of immense proportion. Equivalent to watching a city be destroyed.

I think you underestimate just how unrelated intelligence and morality are on a fundamental level. Read up on the "orthogonality thesis".

The closest thing to morality that will arise from intelligence is game theory.

1

OriginalCompetitive t1_jeeq4b3 wrote

I think you’re missing how utterly alien any GAI will be to us. We have a single mind, closed off from direct contact with others.

But an AI mind will be able to split into thousands of separate copies, live independently, and then recombine (i.e., by literally copying itself onto multiple computers, severing the connections, and then reconnecting). Will that feel like being one mind, or a crowd of minds? Would a mind that is accustomed to creating copies and then shutting them down care about death?

Or consider the ability to store frozen copies of itself in storage files. What would that feel like? How would AGI think of that? What sort of “morality” would a being have that is constantly extinguishing copies of itself (killing them?) but itself never dies?

Would an AI that can store and revive itself across potentially decades or longer understand time? Would an AI that cannot physically move through the world understand the world? Would it live solely on the plane of abstract ideas, and never realize that a “real” world of space and time and humans with other minds even exists?

It’s absurd to wonder about the human morality of such an entity. It’s like asking if the sound of the wind has morality.

1

echohole5 t1_jeeqfqi wrote

I don't see any indication that morality emerges with intelligence.

1

outsideisfun t1_jeh4qzm wrote

We don't have any understanding of consciousness ourselves; how can we anticipate that an AI can/will achieve that state?

1

fwubglubbel t1_jecmobn wrote

> as we realize that non-human intelligence is inevitable

There is absolutely no reason to believe this. It is pure fantasy.

0

Petal_Chatoyance t1_jee54xw wrote

The only thing that could prevent it is shutting down all computer research. That isn't going to happen.

Besides, technically, non-human intelligence already exists: Koko the gorilla, for example, was able to question her own existence, the meaning of her life, what death means, and various issues of morality.

There is nothing special about human intelligence, and nothing special about meat. What can be done on meat can be done on a machine substrate.

The fantasy is believing - without evidence - that there is anything magically unique, or unreplicable, about human intelligence.

2