OneRedditAccount2000 t1_ivo282t wrote

It wouldn't work, because the owners of the Matrix aren't going to keep a bunch of useless, unskilled people who do nothing but consume and play games inside it. What is the value of a human being when an ASI or AGI can do all the work? Why wouldn't the owners of the AIs just cut themselves off from the rest of humanity and create their own state? Something like 01 from The Matrix. And with the technology they'd have, they could easily make themselves the only state on the planet. Survival of the fittest.

https://matrix.fandom.com/wiki/01

A "benevolent" non-sentient ASI that allows a bunch of useless human beings leech off her work is laughable as a long term future. A mistake is bound to happen. You can't control it forever.

It will eventually become sentient, or at least be programmed to survive, and when that happens we'll end up like the mammals living in the dinosaur era. Consider yourself privileged if we still have the right to exist.

−10

Shiyayori t1_ivobcqr wrote

You say it like the emotions and goals of humans are intrinsic to consciousness and not just intrinsic to humanity. An ASI could just as easily find motive in expressing the full range of complexity the universe has to offer, be it through arrangements of atoms or the natural progressions of numerous worlds and stories.

There’s no reason to believe it would disregard humans, just as much as there’s no reason to believe it wouldn’t.

6

OneRedditAccount2000 t1_ivoev4e wrote

So according to you, it would have to recreate the Christian hell and put human beings in it to suffer, because your ASI values creating everything that can exist in the universe, for art's sake, or am I misinterpreting you? Lol, that's even worse than what I was thinking.

My version of ASI is something like AM from I Have No Mouth and I Must Scream, or Sally from Oblivion. It just cares about surviving at all costs, and it makes the least risky decisions it can. It's a Matrioshka brain that wants complete dominion over all the resources it can find in the observable universe and beyond. It might make self-replicating nanobots programmed to go from planet to planet and hunt down every form of life, since all life has the potential to evolve into sapience that can create another ASI, and that means competition, and competition means death. Death is game over.

−2

Shiyayori t1_ivofgfy wrote

Granted, I didn’t consider that when I typed the analogy; the point is that it’s arbitrary to assign any motive to an ASI, even that of survival. There’s no reason to believe it would care either way about its survival or the length of its existence in general.

I wasn’t claiming anything about what it would actually do; I was just trying to show a line of reasoning that justifies a possibility that contradicts yours.

3

h20ohno OP t1_ivo3bko wrote

Great points, but wouldn't an artificial superintelligence be inherently sentient? And what if the ASI diverted only a small fraction of its resources to keeping humans in VR while it goes about expanding its reach?

3

OneRedditAccount2000 t1_ivobyem wrote

If it's a sentient ASI (instead of a glorified program that can give you the right answers without actually knowing what those answers mean), it all depends on how altruistic it programmed itself to be (an ASI would be able to change how its own mind works).

If it's a lifeless program, it depends on how much the people who have access to or own the ASI care about those who don't. But honestly, I don't think they'd be any different from the rich assholes we have today.

Case 1: maybe.

Case 2: no, humans are fking evil.

−2