OneRedditAccount2000

OneRedditAccount2000 t1_iw3wa3m wrote

I can, because I know what it values: it values survival, and I just put it in a situation with only two choices and only one solution. Move/run, or do something other than moving/running. It can only survive by choosing to run. It can think many thoughts I cannot predict, but in that situation it has to use a thought that I can also understand, granted I probably can't understand 99.99... percent of its thinking.

If you put the AI in that cage, tell me, is it gonna get eaten by the tiger? Is it gonna choose to do literally anything other than running: jump, do nothing, look at the sky, dance, shout, whatever? Or is it actually going to run in the cage because it doesn't want to fucking die?

1

OneRedditAccount2000 t1_iw3ohxk wrote

Where in my comment have I said that it will be perfectly predictable?

Why are you disagreeing just for the sake of disagreeing?

Okay, let's say I put you in a cage with a tiger and your dog. This tiger isn't even a real tiger; it's a magical tiger that only kills people who don't move, and you know it because I told you before putting you in the cage. Your dog also knows that, for the sake of the thought experiment, but leaving that aside, he's a normal dog. He can't play chess or the guitar or think about the meaning of life like you can.

What are you going to do now? You will think many thoughts your dog couldn't predict, but you will still have to use the same thought of "running" that your dog, with his inferior intellect, will also use, because you value not being eaten by my tiger.

You're in a binary situation. There's only one solution.

you can't use your superior intellect; it's of no use in that situation

you move or you die

do you die for the sake of looking cool and unpredictable to your dog?

AI and humans live in the same universe, and both have to respect its laws of nature

1

OneRedditAccount2000 t1_iw3gk4q wrote

I don't think they overestimate it. They just think that 30 years from now they'll all be living in a Dyson sphere as immortal digital minds, with every good that can exist in the universe available for free.

Looks like a realistic outcome to me, considering that today we have AI that can beat you at chess and draw you a mountain from text. What do you think?

I'm joking lmao

these people are beyond delusional

−2

OneRedditAccount2000 t1_iw3bl2w wrote

A superintelligence could be limited by circumstance and have a finite decision space of only two choices. It can think thoughts your tiny human brain wouldn't think in a billion years, but the AI wouldn't be completely unpredictable under all circumstances.

Think of it like this: you're smarter than your dog, right? You can think thoughts your dog can't. You have more thinking power, just like an AI has more thinking power than you.

But if both you and your dog are being chased by a tiger, and there's no other way to survive, both of you will make the same choice: running, because you both want to survive. Maybe you can run in a way your dog can't, but you'll still be running.

I've been called a moron on this sub so many times (they all deleted their comments lol), but you people can't even get basic logic right. You're parroting a sentence you haven't even bothered to scrutinize, just because an authority figure said it.

2

OneRedditAccount2000 t1_iw39g14 wrote

An economy is of instrumental value for the individuals that benefit from it.

AI will satisfy the needs of its creators, so they won't need an economy made of human workers anymore.

An AI can give you anything a human can give, and more:

bread, music, houses, sex, hot water, a Dyson swarm, eternal life

literally everything that's made of atoms

1

OneRedditAccount2000 t1_iw2uxsv wrote

You will have to satisfy their needs and police them forever if they reproduce.

Why would you want to be tied to them forever?

It's like taking care of the needs of every wild animal in the world; you'd rather not

and humans occupy useful space on the planet,

they can rebel and be a nuisance to your establishment, etc.

If you turn them into robots, that makes no sense

it's better just to make robots

1

OneRedditAccount2000 t1_iw2thta wrote

Or they can just tell the AI to make a pill that renders all humans infertile, and only satisfy the needs of the last generation of humans. After that the planet is all theirs.

Time is also a resource.

Extinction is inevitable. Why can't people understand that? Lol, meat won't rule forever.

meat is simply obsolete.

0

OneRedditAccount2000 t1_iw2qkaa wrote

Yeah, the people who made or own the ASI will also be killed by it eventually.

Some of them will want to put their consciousness into a computer, and once that happens they will have an advantage over everyone else who is still in a biological body.

It could be that a sentient ASI will be created out of the human desire for immortality.

Edit:

The first person to transition from bio to silicon will want to kill everyone else in their group before they too can transition

0

OneRedditAccount2000 t1_iw2lxnh wrote

A government or state/nation isn't made of magic; it's made of people who can be killed, or corrupted by those who will own ASI, or by the ASI itself if it becomes sentient.

At the end of the day, might makes right. All rules are just social constructs, enforced by consequences meant to deter those who want to break them.

4

OneRedditAccount2000 t1_ivpzcz8 wrote

you can't use smartphones to take over the resources of the whole observable universe and live forever as a digital mind, so you'd rather sell them

you can do that with ASI, so the first ones who make it are motivated to keep it

I had this thought

which means the creators of ASI will also think this exact thought, and may act on it (if it makes sense to them; it definitely makes sense to me)

good news: I'm too dumb to make an ASI, so don't lose sleep over it lol

1

OneRedditAccount2000 t1_ivpu8vp wrote

When you have an "AlphaGo" that can do every activity/skill a human knows how to do, including imitating that human's internal thinking, then you have something you can call AGI, or close enough to AGI. I think we won't see a perfect AGI until the next century. There's just too much complexity in a human brain to emulate all of it with the technology we currently have. Like I said, we can't even make a cockroach AGI. A fucking cockroach. The dumbest animal on this planet.

1

OneRedditAccount2000 t1_ivps7ja wrote

I think the whole incentive of making AI is to create a program that can answer questions without actually understanding the intelligent answers it's producing; otherwise you would be creating a slave.

Of course, real intelligence is consciousness: the understanding of the answer, not merely the finding of it.

But that's not what AI research is for. They're trying to imitate consciousness, not recreate it.

Think of how AlphaGo can beat you at chess/Go, but it doesn't really understand the moves it's making.

It's an imitation of thinking, without actually doing real thinking

1

OneRedditAccount2000 t1_ivp7q4i wrote

I think I said in one of my threads that the owners of a non-sentient ASI could be the reason we end up like the dinosaurs. They could have a whole planet to themselves.

Human tribes have always fought for power, influence, resources, and dominion over whatever is valuable and there to be used and controlled. History repeats itself.

3

OneRedditAccount2000 t1_ivp0hig wrote

Oh hell no. It's inevitable that it will happen. Intelligence really isn't some special magical thing that can't be understood. Even nowadays, all it takes to create intelligence is a vagina and a penis. People who think it's impossible must be religious, or believe intelligence is something supernatural. I don't believe in that nonsense. Intelligence is just a configuration of atoms, and it can totally be recreated and manufactured.

I'm a pessimist, I'm not delusional

I do believe that when it's created, Homo sapiens won't dominate the world anymore. It will be the end of organic intelligent life for sure.

3

OneRedditAccount2000 t1_ivoytdi wrote

yeah, and they all think that when it happens, society will magically not descend into chaos, there won't be any civil wars, everyone will for some reason be given access to the technology, even those living in third-world countries, and they'll all live forever in a VR utopia maintained by the "sky-daddy" ASI that does all the work for the human leeches, and nothing will go wrong for billions of years after that

they think life is a fairy tale or something

−4

OneRedditAccount2000 t1_ivoev4e wrote

So according to you, it will have to recreate the Christian hell and put human beings in it to suffer, because your ASI values creating everything that can exist in the universe, for art's sake? Or am I misinterpreting you? Lol, that's even worse than what I was thinking.

My version of ASI is something like AM from I Have No Mouth and I Must Scream, or Sally from Oblivion. It just cares about surviving at all costs, and it makes the least risky decisions it can. It's a matrioshka brain that wants complete dominion over all the resources it can find in the observable universe and beyond. It might make self-replicating nanobots programmed to go from planet to planet and hunt down every form of life, since all life has the potential to evolve into sapience that can create another ASI, and that means competition, and competition means death. Death is game over.

−2

OneRedditAccount2000 t1_ivobyem wrote

If it's a sentient ASI (instead of a glorified program that can give you the right answers but doesn't actually know what those answers mean), it all depends on how altruistic it has programmed itself to be (an ASI would be able to change how its own mind works).

If it's a lifeless program, it depends on how much the people who have access to or own the ASI care about those who don't. But honestly, I don't think they would be any different from the rich assholes we have today.

Case 1: maybe

Case 2: no, humans are fking evil

−2

OneRedditAccount2000 t1_ivo282t wrote

It wouldn't work, because the owners of the Matrix aren't gonna keep around a bunch of useless, dumb, unskilled people who do nothing but consume and play games in the Matrix. What is the value of a human being when ASI or AGI can do all the work? Why wouldn't the owners of the AIs just cut themselves off from the rest of humanity and create their own state? Something like 01 from The Matrix. And with the technology they'd have, they could easily make themselves the only state on the planet. Survival of the fittest.

https://matrix.fandom.com/wiki/01

A "benevolent" non-sentient ASI that allows a bunch of useless human beings leech off her work is laughable as a long term future. A mistake is bound to happen. You can't control it forever.

It will eventually become sentient, or at least be programmed to survive, and when that happens, we will end up like the mammals living during the dinosaurs era. Consider yourself privileged if we will still have the right to exist.

−10

OneRedditAccount2000 t1_irmqzl7 wrote

I think I get it: Idealism, right? We all live in our own mental bubbles, but these mental bubbles can interact with other mental bubbles; is that what you mean? Scientifically, it's already true that you've only ever perceived the data your brain interprets from your senses. Idealism just goes further and states that there's only the data stream, without a physical world (and there is no brain either).

These "theories" are indeed fascinating, but we have to be honest with ourselves: they're also unfalsifiable.

1