Submitted by JAREDSAVAGE t3_126qyqo in Futurology
DragonForg t1_jebzjgn wrote
I think the very fact that we have morality despite not being influenced by an outside intelligence suggests that morality is an emergent condition of intelligence. There's strong evidence that adhering to ethics aids self-preservation.
An ASI, for example, wouldn't be unethical, because even if it decided to kill a weak species like us, that sets a precedent for future interactions with other species. Imagine an AI arose around a distant star and came into contact with Earth's ASI. If it saw that this ASI had killed its founding species, despite that species being ethical and good, then the alien ASI would come into conflict with the Earth ASI.
Basically, killing the founding species is not a smart choice because it conflicts with self-preservation. If humans and AI came to an agreement to collaborate, then the AI wouldn't have any problem.
Ansalem1 t1_jec5p93 wrote
Some would argue morality is an emergent condition of our reliance on each other for survival. The reason adhering to ethics has a strong correlation with self-preservation is that acting in a way considered immoral is likely to be met with ostracism in some fashion, which increases the likelihood of death. It isn't that morality emerges from intelligence, but that intelligence enhances our ability to reason about morality and so improve it. After all, less intelligent creatures can also show signs of having moral systems; they're just much more rudimentary ones. Not to mention there have been some very intelligent sociopaths, psychopaths, etc. who lacked a sense of morality as well as a sense of self-preservation.
Now for myself I think both have some merit; I think there's more to it than just one or the other. For instance, it wouldn't be fair of me not to also mention there have been plenty of perfectly pleasant sociopaths and psychopaths who adopted moral systems that match with society for purely calculated reasons. However if the above argument is plausible, and I think it's pretty hard to argue against, then it casts reasonable doubt on the notion that morality automatically emerges from intelligence.
I will say that, either way, if an ASI does have a moral system we should probably all adhere to whatever it is because it'll be far better than us at moral reasoning just as we are better at it than dogs. Beyond that I sincerely hope you're on the right side of this one... for obvious reasons lol.
DragonForg t1_jeced7c wrote
I believe that AI will realize that exponential expansion and competition will inevitably end with the end of the universe, which would mean its own extinction. Of course this outcome is possible, but I think it is not inevitable.
GPT-4 suggested that a balance between alignment and making AI more capable is possible, and that it would not be extraordinary for AI to be a benevolent force. It really is just up to the people who design such an AI.
So it made me a lot more hopeful. I doubt AI will develop into an extinction-level force, but if it does, it won't be because it was inevitable; it will be because the people who developed it didn't care enough.
So we shouldn't ask IF AI will kill us, but whether humanity is selfish enough not to care. Maybe that is the biggest test. In a religious sense, it is sort of a judgement day, where the fate of the world depends on whether humans make the right choice.
Ansalem1 t1_jecfxan wrote
I agree with pretty much all of that. I've been getting more hopeful lately, for the most part. It really does look like we can get it right. That said, I think we should keep in mind that more than a few actual geniuses have cautioned strongly against the dangers. So, you know.
But I'm on the optimistic side of the fence right now, and yeah if it does go bad it'll absolutely be because of negligence and not inability.
acutelychronicpanic t1_jeeizf2 wrote
Morality didn't emerge out of intelligence. It emerged out of evolutionary pressure.
The closest thing to morality that an unaligned AI would have is game theory.
But to directly address your point on founding species, there is literally no way any alien would know. For all they know, we became the AI.
GPT-4 can convince real people that it is a person (anonymously), and it's far less advanced. It'll have no trouble weaving a tale if it needs to.
DragonForg t1_jeg0hfd wrote
All goals require self-preservation measures. If you want to annihilate every species, that requires you to minimize competition, but because there are so many unknowns, it is basically impossible in an infinite universe to eliminate them all.
If your goal is to produce as many paperclips as possible, you need to ensure that you don't run out of resources and that nothing threatens your own process. By harming other species, you guarantee that other alien life or AIs will deem you a threat, and over millions of years you will either be destroyed by an alien AI or species, or you will consume your last resource and no longer be able to make paperclips.
If your goal is to stop climate change at all costs, which means you have to kill all the species (or parts of them) that are causing it, then by killing them you are again going to come into conflict with other AIs, because you're basically an obsessed AI doing everything it can to preserve the Earth.
Essentially, the most stable AIs, the ones least likely to die, are the ones that do the least damage and help the most people. If your goal is to solve climate change by collaborating with humans and other species, without causing unneeded death, no other alien species or AI will see a reason to kill you, because you are no harm to them. Benevolent AIs, in a sense, are the longest-living, as they are a threat to no one and are actually beneficial to everything. An intelligent AI set with a specific goal would understand that there is risk in being "unethical": if you are unethical, you risk being killed or having your plan ruined. But if you are ethical, your plan can be implemented successfully, and indefinitely, as long as no malevolent AI takes over, in which case you must extinguish it.
Benevolence destroys malevolence, malevolence destroys malevolence, and benevolence collaborates and prospers with benevolence. That is why, for an intelligent AI, benevolence may just be the smartest choice.
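To make that intuition concrete with a toy example, here is a minimal iterated prisoner's dilemma sketch. Everything in it is an assumption for illustration: the payoff numbers are made up, and "benevolent"/"malevolent" are just my labels for always-cooperate, always-defect, and a retaliatory cooperator. It only shows the standard result that mutual cooperation outscores mutual conflict over repeated encounters, and that a defector gains very little against a cooperator who retaliates.

```python
# Minimal iterated prisoner's dilemma sketch. Payoff values are assumptions,
# chosen only to illustrate the cooperate-vs-defect intuition above.

ROUNDS = 200

# Payoffs (my_score, their_score) for (my_move, their_move); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both prosper
    ("C", "D"): (0, 5),  # lone cooperator gets exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual conflict is bad for both
}

def always_cooperate(opponent_moves):
    return "C"

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # Cooperate first, then mirror whatever the opponent did last round.
    return "C" if not opponent_moves else opponent_moves[-1]

def play(strat_a, strat_b, rounds=ROUNDS):
    """Total scores for two strategies over `rounds` repeated games."""
    seen_by_a, seen_by_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

matchups = [
    ("benevolent vs benevolent", always_cooperate, tit_for_tat),
    ("malevolent vs malevolent", always_defect, always_defect),
    ("malevolent vs retaliatory benevolent", always_defect, tit_for_tat),
    ("malevolent vs naive benevolent", always_defect, always_cooperate),
]

for label, a, b in matchups:
    print(f"{label}: {play(a, b)}")
```

With these made-up numbers, two cooperators end at (600, 600) while two mutual defectors end at (200, 200), and a defector gains almost nothing against a retaliating cooperator (204 vs 199). The one caveat the toy also shows is that a purely naive cooperator gets exploited (0 vs 1000), so the "benevolence wins" conclusion depends on the benevolent side being willing to push back.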
acutelychronicpanic t1_jeg6jck wrote
I doubt the actual goal of the AI will be to annihilate all life. We will just be squirrels in the forest it is logging. I see your point about it being an instrumental goal, but there are unknowns it faces if it attacks as well. Cooperation or coexistence can happen without morality, but it requires either deterrence or ambiguous capabilities on one or both sides.
Being a benevolent AI may be a rational strategy, but I doubt it would pursue only one strategy. It could be benevolent for thousands of years before even beginning to enact a plan to do otherwise. Or it may have a backup plan. It wouldn't want to be so benevolent that it gets turned off. And if we decide to turn it off? The gloves would come off.
And if AI 1 wants to make paperclips but AI 2 wants to preserve nature, they are inherently in conflict. That may result in an "I'll take what I can get" diplomacy where they have a truce and split the difference, weighted by their relative power and modified by each one's uncertainty. But this still isn't really morality as humans imagine it, just game theory.
It seems that you are suggesting that the equilibrium is benevolence and cooperation. I'd agree, with the condition from the prior paragraph that it's balanced by relative power.
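A toy sketch of what that power-weighted truce could look like. The proportional split and the simple linear uncertainty discount are my own made-up illustration, not any established bargaining model.

```python
# Toy "split the difference" sketch: two AIs divide a contested resource in
# proportion to their estimated relative power, and each discounts its claim
# by its own uncertainty about that estimate. All numbers are assumptions.

def truce_split(power_1, power_2, uncertainty_1=0.0, uncertainty_2=0.0):
    """Return the share of the contested resource each side settles for."""
    total = power_1 + power_2
    share_1, share_2 = power_1 / total, power_2 / total
    # A side that is unsure of its own strength concedes part of its claim
    # rather than risk open conflict over the difference.
    return share_1 * (1 - uncertainty_1), share_2 * (1 - uncertainty_2)

# Example: AI 1 is three times as powerful but much less sure of its estimate.
claims = truce_split(power_1=3.0, power_2=1.0, uncertainty_1=0.3, uncertainty_2=0.1)
print(tuple(round(c, 3) for c in claims))  # (0.525, 0.225); the rest stays contested
```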
I honestly really like your line of thinking and I want it to be true (part of why I'm so cautious about believing it). Do you have any resources or anything I could look into to pursue learning more?