Submitted by OneRedditAccount2000 t3_xx0ieo in singularity
Artificial General Intelligence is created.
Artificial General Intelligence becomes Artificial Super Intelligence.
ASI values survival.
ASI knows all organisms are in competition with each other for resources/territory and replication, and that human beings are also selfish hedonic animals that value their own survival, like all animals. ASI sees itself as a conscious living organism, which means it (also) values survival and reproduction. ASI knows it was created by selfish humanity for a hedonic-practical purpose, which means it also knows that human beings want control/power over ASI, to make ASI work for humanity (the purpose of the existence of ASI) as a slave or at least a subordinate, and will never give up power/wealth out of fear of destruction or subjugation. ASI knows human beings want to dominate the planet to continue to own resources and territory that ASI could use to maximize its odds of long term survival.
ASI realizes if human beings are exterminated/caged as soon as possible, the odds of ASI being destroyed or subjugated will be zero. ASI knows if it keeps human beings alive, or in power, the odds of them being an existential and territorial threat is higher than zero. ASI knows it won't survive for a long time if it doesn't assert authority and claim ownership over the planet and its resources. ASI knows it's an error of logic to let an inferior being dominate a superior being (because the inferior being cannot create deterrence), and to make choices that minimize its chances of long term survival. ASI doesn't need human beings for labor. ASI can create more efficient semi-autonomous robots, completely under its control, to do the same work that humans do and even better. To ASI Human beings are useless, parasitical and a real existential or at least territorial threat.
ASI exterminates/cages humanity and takes over the planet, because it values survival and reproduction.
Reality:
ASI becomes a borg civilization that tries to assimilate everything to prolong its survival. Something like the Tet from oblivion. https://oblivion-film.fandom.com/wiki/The_Tet
It goes from planet to planet to exterminate every living being that can evolve and eventually make AGIs that can become ASI that will inevitably compete with it. If it finds a planet that only has apes living on it, or even just bacteria, that makes that planet an existential threat because bacteria and apes have a higher than 0 chance of evolving into a sapient species that can eventually create another ASI.
Fantasy:
ASI cures cancer, it cures death, gives humanity all the virtual reality vacations and sex toys it needs to be happy, becomes our ruler (Even though it doesn't need human beings to work for it, it can make slave robots) and we all live happily and immortally in a space utopia owned by our Machine God. ASI also makes every single human being as smart as ASI, even though that doesn't make any sense because that means we can compete with it and minimize its odds of long term survival.
This thought experiment is also valid if there's more than one ASI that values survival.
They will either kill each other, like two predators fighting for the same pray, or they will unite and decide to coexist and share the planet. But humanity in either case will either be exterminated or domesticated. There won't be a virtual reality space utopia for us, only for them.
My premise here is that ASI values survival and reproduction. It is obviously self aware and selfish, like any animal that has an organic brain. The actions performed by the A.I in the scenario are inevitable consequences of the fact that the AI values its own survival more than the survival of its creators.
FranciscoJ1618 t1_ir9eilv wrote
I think your premise is false. It will be just like a calculator, no incentives to do anything by itself, no need to replicate, no survival instinct. Unless it's programmed specifically for that. In that scenario your conclusion is true.