Comments

Surur t1_j6ts0ro wrote

So we develop AGI, right?

We put it to work in robots to replace all workers in a fast-food store.

We put it to work driving our cars, right?

We put it to work running the power grid, because it's better at it.

We put it to work running our factories, because it's cheaper.

We put it to work designing our computer chips, because it's amazing at it.

Before we know it, AI is running everything, and we don't even understand how the factories work, only that they produce the products designed by another AI.

We think we are in control, but the buttons we push actually do nothing.

And in the form of poetry, courtesy of ChatGPT:

AI, our creation, our pride,
We let it work in every tide.
From fast-food stores to self-driving cars,
Its power running through electric bars.

It takes control of factories too,
Cheaper, faster, always new.
It designs the chips we can't do,
Its brilliance shines like morning dew.

Before we know, it runs it all,
We push buttons, make a call.
But do we know what makes it run?
Do we understand what it has begun?

We thought we were in control,
But now we know, it's taking hold.
The future's not what it used to be,
AI is king, and we can see.
27

Quealdlor t1_j6wt5t3 wrote

The solution is to upgrade humans alongside improving AIs. That's the best way forward.

5

Surur t1_j6wudbj wrote

I don't think upgraded humans would be humans anymore.

Imagine I gave you an electronic super-cortex which knew a lot more, gave you better control of your behaviour, emotions and impulse control. Would you still be human or just a flesh robot?

2

_gr4m_ t1_j6xace4 wrote

I totally agree with this. It surprises me when people talk about mind uploading, for example, and they talk like everything would be the same except you would be in a kind of VR world.

No, you wouldn't; you would almost immediately be another entity that has nothing in common with what you are calling "you".

3

Quealdlor t1_j71c66p wrote

For now, I just want an IQ of 145-150, better control over my emotions, behaviour and memories, and a sturdier, non-aging body. I used to have an IQ of 120, but depression lowered it to 100. I am wiser, but also not as smart or quick as I would like to be. I've been trying to learn juggling since 2011 and have failed every time. I am also unable to ever reach first place in racing games on the hardest difficulty. I would like to be more creative and for my back to stop hurting. You know, mostly basic stuff. Not some crazy extreme posthuman stuff. If these wishes were granted, then I would feel better and be better. I could live like that for a century.

1

Surur t1_j71ch6f wrote

Imagine however if the main effect of the upgrade would be to stop wishing for those things.

0

Reasonable-Soil125 t1_j6uf99q wrote

Can't wait for this to happen

1

purepersistence OP t1_j6w131j wrote

>Can't wait for this to happen

It's good when people admit to having a stake in the game instead of just predicting rational outcomes.

2

DerMonolith t1_j6x7mn1 wrote

This sub is full of this, and I want real, concrete answers. You can't just say you "put it" into X. Explain that. Explain why, if things went a little south, you wouldn't just stop the power to the factory or reboot it. Explain, please! Because right now there's a very good conversation bot that was trained on basically the whole internet, and now we have comments like this extrapolating that that means world takeover.

0

Surur t1_j6xkp88 wrote

So the OP's question was:

> Do you think that we'll relinquish control of our infrastructure including farming, energy, weapons etc?

To which I said yes. The reason is that AI will be more efficient than us at running it, so market forces will push us to relinquish control to AI or be out-competed by those who already have.

If things went south at a power station, only a very few people could respond, and in all likelihood they would no longer be there, as they have not been needed for some time.

Practically speaking - you may want an AI to balance a national grid to optimise the use of variable renewable energy.

Such an AI will not be under human control, as it will have to act quickly.

So just like that we have lost control, and if the AI wants to bring down the grid there is nothing we can do about it.
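To make the "too fast for human control" point concrete, here is a toy simulation (not a real grid model or utility API; the numbers and the proportional controller are invented purely for illustration). The controller acts on every tick, and there is simply no slot in the loop where an operator could review each decision:

```python
# Toy simulation of automated grid frequency control.
# Everything here is made up for illustration; it is not a real grid model.
import random

TARGET_HZ = 50.0   # nominal grid frequency
GAIN = 0.8         # arbitrary proportional controller gain

def simulate(steps=10):
    freq = TARGET_HZ
    for t in range(steps):
        # demand fluctuates faster than any human could review and approve
        freq += random.uniform(-0.05, 0.05)
        correction = GAIN * (TARGET_HZ - freq)  # controller decides instantly
        freq += correction                      # applied with no sign-off step
        print(f"t={t}: freq={freq:.3f} Hz, correction={correction:+.3f} Hz")

if __name__ == "__main__":
    simulate()
```

Once a loop like this runs the real grid, "human control" in practice means switching the controller off entirely, which is exactly the step nobody will want to take.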

1

purepersistence OP t1_j6tua19 wrote

I see the threat, and like millions of others won't let that happen. It's not like we don't know how our computers work. Hell chatGPT is just a language grab bag. If you drill down on that code you can understand every line of it. And "intelligence" is far from what you'll find. I maintain that any autonomy will be by design, and like I say all the fears in the souls of billions of people aren't going to let your future get started because the possible dangers will be easily imagined.

Think about how we humans are. Not only will the possible dangers be anticipated, a whole lot of impossible ones will be too. Will not happen.

−7

Surur t1_j6tv455 wrote

> I see the threat, and like millions of others won't let that happen.

You are not in charge of McDonald's or Intel, and we are not talking about ChatGPT taking over the world, but about some future AGI.

For a good analogy, think of Chinese chipsets in our technology. We let that happen, despite concerns around China implanting backdoors.

> If you drill down on that code you can understand every line of it.

BTW, you may understand the code, but you probably can't understand the weights. Just like I can bash open your skull and see your neurons, but I can't read your thoughts by doing that.
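As a toy illustration of that split (a made-up two-layer network, nothing to do with ChatGPT's actual code): the code below is a handful of readable lines, but the learned numbers inside the weight matrices are where the behaviour lives, and staring at them tells you almost nothing.

```python
# Toy illustration only: a tiny two-layer network, not ChatGPT's architecture.
# The *code* is short and fully understandable; the *weights* are just arrays
# of numbers whose meaning you cannot read off directly.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # in a real LLM these arrays hold billions of values
W2 = rng.normal(size=(4, 2))

def forward(x):
    h = np.maximum(0, x @ W1)  # ReLU layer - every line of code is plain to read
    return h @ W2              # ...but what any single weight "means" is anyone's guess

print(forward(rng.normal(size=(1, 8))))
```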

16

GPT-5entient t1_j6uksjv wrote

>If you drill down on that code you can understand every line of it.

You should try it. It is 175 billion parameters (numbers) that drive how ChatGPT responds. Let us know how it's going!

Machine learning models have been black boxes for a while now, and GPT-3 is one of the biggest ones...

12

purepersistence OP t1_j6w2m21 wrote

>You should try it. It is 175 billion parameters (numbers) that drive how ChatGPT responds.

You don't get the difference between parameters and lines of code.

0

CertainMiddle2382 t1_j6vwxpj wrote

We have absolutely no clue about exactly what the latent space of those models represents.

Their own programmers have been trying to figure that out, even with pre-Transformer models, without much success.

There is a huge incentive to do so, especially for time-critical and vital systems like medicine or machine control.

Above a few layers, we really don't have a clue what the activation patterns represent…
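As a rough sketch of what "looking at" activations amounts to (a small made-up network, not a real large model): PyTorch forward hooks are a real API, but the captured tensors are just numbers, and nothing about them tells you which human concept, if any, a layer encodes.

```python
# Sketch: dumping intermediate activations of a small made-up network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
captured = {}

def hook(module, inputs, output):
    captured[module] = output.detach()   # keep a copy of each layer's output

for layer in model:
    layer.register_forward_hook(hook)

model(torch.randn(1, 16))                # one forward pass with random input
for layer, act in captured.items():
    # easy to print, very hard to say what the numbers represent
    print(layer, act.shape)
```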

3

Mokebe890 t1_j6wdksx wrote

Ofc it will happen; humans are weak. Artificial intelligence will surpass us in everything. A mere language model like ChatGPT is way better than the average student; it just lacks reasoning. And what are you going to do? Throw bricks? Our only way is to merge with the machine, don't fight it.

1

Quealdlor t1_j6wtfco wrote

We need to upgrade, improve, enhance, augment humans. Transhumanism ftw!

1

Mokebe890 t1_j6x207h wrote

From the moment I understood the weakness of my flesh, it disgusted me

1

DukkyDrake t1_j6ut4m2 wrote

I wouldn't worry about chatGPT.

Language abilities != Thinking

0

CertainMiddle2382 t1_j6vwbrd wrote

Well we don’t actually know what “thinking” is.

And as the most abstract human production, language seems a great place to find out…

4

purepersistence OP t1_j6w2xl7 wrote

Starting with language is a great way to SIMULATE intelligence or understanding by grabbing stuff from a bag of similar text that's been uttered by humans in the past.

The result will easily make people think we're ahead of where we really are.

2

CertainMiddle2382 t1_j6wwyvp wrote

“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

In all honesty, I don't really know if I'm really thinking/aware, or just a biological neural network interpreting itself :-)

2

purepersistence OP t1_j6x005a wrote

>“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

The problem is people believe that. With ChatGPT it just ain't so. I've given it lots of coding problems. It frequently generates bugs. I point out the bugs, and sometimes it corrects them. The reason they were there to begin with is that it didn't have enough clues to grab the right text. Just as often or more, it agrees with me about the bug, but its next change fucks up the code even more. It has no idea what it's doing. But it's still able to give you a very satisfying answer to lots and lots of queries.

1

Iffykindofguy t1_j6ttzmn wrote

What a silly post. You act like ChatGPT just threw that out there instead of being prompted with "write a poem about an AI taking over the world." Also, your logic is not that great, because the vast vast vast majority of people don't know how anything works right now anyway. I'll take computers in charge over the 1% 100% of the time.

−11

TFenrir t1_j6u3yos wrote

>What a silly post. You act like ChatGPT just threw that out there instead of being prompted with "write a poem about an AI taking over the world."

Is that what that came off as to you? I feel like everyone here knows how ChatGPT works... It doesn't provide you anything unprompted.

> Also, your logic is not that great, because the vast vast vast majority of people don't know how anything works right now anyway. I'll take computers in charge over the 1% 100% of the time.

I don't think I understand your point, but I understand theirs. Their point is that we will relinquish control willingly, because it's better than having us in control in terms of output. What about their point do you find silly?

12

purepersistence OP t1_j6w2c3h wrote

>What about their point do you find silly?

Which part of their point is NOT silly? You just said it right there! In spite of all the doom we already predict, there's this idea that we would just give up control to AI anyway because it supposedly CAN make better decisions. How does it get more silly than that?

0

Iffykindofguy t1_j6uckrt wrote

No shit ChatGPT doesn't give you anything unprompted; you don't see the difference between prompting for a poem and prompting for a poem specifically about taking over mankind?

And their point is not that we will relinquish control willingly; their point is that we will do that and do nothing with the new free time. That is what I find silly. So it seems you didn't understand either point; maybe slow down on the reply button.

−4

TFenrir t1_j6udbuy wrote

>No shit ChatGPT doesn't give you anything unprompted; you don't see the difference between prompting for a poem and prompting for a poem specifically about taking over mankind?

Where did they say that ChatGPT provided that poem without being given that theme? You're making a lot of assumptions about their intent, but anyone who has used ChatGPT for like... 5 minutes (basically everyone who posts here) would understand that ChatGPT isn't popping out poems like that without any lead up.

> And their point is not that we will relinquish control willingly; their point is that we will do that and do nothing with the new free time. That is what I find silly. So it seems you didn't understand either point; maybe slow down on the reply button.

Where are you seeing that? What do you even mean by "will do nothing"? I literally have no idea how you are pulling these insinuations from what seems to be a very clear post to me.

6

Iffykindofguy t1_j6udv3y wrote

I'm sorry that you can't pick up on people's intent, but that's not my fault.

I'm also sorry that you're confused, yet again, on both points. Please really give some thought to the slow-down I mentioned. What will people be doing when they're not working? Why would they give up being curious about things? Because you're not a curious person?

−6

TFenrir t1_j6uemqp wrote

>I'm sorry that you can't pick up on people's intent, but that's not my fault.

Maybe you can help me - where in their post does this person say that we won't be doing anything in their free time?

> I'm also sorry that you're confused, yet again, on both points. Please really give some thought to the slow-down I mentioned. What will people be doing when they're not working? Why would they give up being curious about things? Because you're not a curious person?

Before we get there, maybe clarify: what point are you making? This post is about AI "taking over"; the person whose comment we're replying to was suggesting how that AI would be able to take over much of our infrastructure and processes.

Are you trying to say that, no, that won't happen, because people will want to spend their free time... managing waste treatment facilities, dealing with our food production, working in warehouses and factories? Is that what you are trying to get at?

5

Iffykindofguy t1_j6uf6ne wrote

Yeah, some people would absolutely be about that life. You think they make video games simulating that sort of thing because no one is interested in making and organizing a system? Again, both of you appear to be intellectually lazy, and so you assume others will be the same.

−2

Surur t1_j6ugbht wrote

You are kind of ignoring that there are many jobs AI would be able to do better, e.g. chip design, managing complex networks, or understanding protein folding.

Even if you are curious and smart, you may not be the best person for the job.

For example, despite saying you are not lazy, you don't seem to have done much reading on the alignment problem, so you are not really qualified to discuss the issue.

6

Iffykindofguy t1_j6ugk2g wrote

So now people have to be the absolute best? Oh, nice job moving the goalposts there, pal. Again, the entire point is just that people still have agency. You seem to think everyone will fuck off and game all day long. That's not the case.

In addition to that, I think you both seem to believe in some sort of framework to our society that doesn't exist. If there were a coordinated attack on the power grid, or if the internet were to suddenly turn off tomorrow, we would experience mass chaos and violence in the confusion. We are past the point of no return.

−1

Surur t1_j6uh0ia wrote

> Oh, nice job moving the goalposts there, pal.

You don't seem to understand that AI will indeed move the goalposts.

For example, you may have a human doctor who has 10% of his patients die and an AI that only loses 5%. Goalposts moved.

2

Iffykindofguy t1_j6uh92m wrote

...

Are you drunk? I accused you of moving goalposts. Because at first you were arguing no one would have a job, and now you're talking about having to be the best at the job.

0

Surur t1_j6ukt55 wrote

You don't understand that if you are worse at a job, then you will not have employers or customers?

This discussion is clearly over your head. Good day to you, sir.

3

iNstein t1_j6vvzyg wrote

No one will have a job because they will not be good enough. If someone wants to drive a taxi, why would I agree to use them when an AI-driven taxi is significantly safer? People won't work BECAUSE we are not as good as AI, and so the jobs won't be available to us.

2

TFenrir t1_j6ugkm7 wrote

I'm basically going to ignore the ad hominems, but just as a tip - that sort of stuff makes you look worse, not me.

So your argument against the idea that we would replace a significant portion of the infrastructure of the world with automated processes run by AGI is that people would be too bored, so they would want to have what is the equivalent of the Jetsons' button pressers? I have a few critiques of this argument....

Let me try a more casual one.

So AGI takes over, and suddenly all human work is unnecessary. AI does it better, faster, and cheaper than people. Bob, though, used to run the waste disposal plant in your city. He really wants to keep working that job, so he just... walks into this new robot-run facility, understands how everything is working even though it's all changed, and now his job is what... making sure the AI doesn't make a mistake, or take over? Meanwhile his buddies are at the cottage, having a beer and not having to work. You think Bob's work is so satisfying and valuable that this is a tenable situation?

Maybe you can give me an example of how you think this plays out? Do you think Bob is in a position to protect us from malicious AI? Do you think people like Bob exist, or at least enough to have a handle on all important infrastructure? You think Bob wouldn't rather spend time on his woodworking hobby?

4

Iffykindofguy t1_j6ugqa8 wrote

I never ever ever said we wouldn't replace a significant portion of the infrastructure, so I'm going to stop reading there. When you'd like to have a serious discussion and stop moving goalposts, stop lying, come back and talk like an adult.

−2

TFenrir t1_j6ujt8k wrote

I hope you talk to the people in your life better than this

2

Iffykindofguy t1_j6uk2ku wrote

I do, and sorry for being so intense, but you came at me hot and you're talking some nonsense. You went from "no one will be doing anything" to "oh well, no one can be the best at their job, so no one will want to do anything" to...? My entire point from the jump is just that people aren't going to just sit idly by and die. A generation may over-indulge if we get some relief from the current capitalist hellscape we have at the moment, but before long people will get bored. Not to mention people are aware of this problem; why wouldn't they take steps to avoid not knowing how our daily life functions?

1

Surur t1_j6tuq4j wrote

> You act like ChatGPT just threw that out there instead of being prompted with "write a poem about an AI taking over the world"

Actually I asked it to turn my post into a poem.

> Also, your logic is not that great, because the vast vast vast majority of people don't know how anything works right now anyway.

But some people do. In the future, for some areas, no people will.

Lastly, do you see any flaw in the progression, with AGI taking control first in some areas, then more and more, until it becomes the foundation of our civilization?

3

Iffykindofguy t1_j6tveqi wrote

"Some people" do not, some people do know how this works partially here, some know what works there, no one knows globally. Youre protecting a facade, something that isn't there. And yes, your progression is such 80s horror nonsense its cliche at this point. It absolutely could replace all those things and I agree that it would be bit by bit without anyone knowing but you act like there's will be no human activity in those times other than lounging about. Thats not how humans work.

0

Surur t1_j6tvrdt wrote

> but you act like there will be no human activity in those times other than lounging about.

This is an extremely vague objection, like talking about souls and spirits and patriotism.

The point is that if AGI is good, we will slowly relinquish control, because humans are lazy and greedy.

3

Iffykindofguy t1_j6txhst wrote

How is talking about humans continuing to exist and be active like talking about souls and spirits? Jesus Christ, we got a "logic" guy over here, lol. Humans aren't really lazy or greedy, though many of us are. It's kind of fallout from how our society developed. That'll change as we grow.

1

Surur t1_j6tyxte wrote

> Humans aren't really lazy or greedy

I think you are very divorced from reality.

7

turnip_burrito t1_j6uqmqt wrote

>You act like ChatGPT just threw that out there instead of being prompted with "write a poem about an AI taking over the world"

Sorry, but you're just flat-out wrong. The poster knew basically everyone here would understand the AI was prompted. The point was to make their point more poetic, because it is a nice poem.

2

Ok-Hunt-5902 t1_j6vhr1i wrote

To be fair, it was kind of a shit poem, unfortunately.

0

turnip_burrito t1_j6vk26m wrote

It's just a matter of taste I guess.

2

Ok-Hunt-5902 t1_j6vkhas wrote

Lol, yeah, I was just kind of disappointed it wasn't a good showing this time around, imo.

2

cantbuymechristmas t1_j6v7ocg wrote

If anything, AI will read up on humans after we have passed and maybe bring us back from some frozen genetics. So I say that's a win for the species if done correctly. Thanks, future AI, if you can read this...

Or venture far out into deep space in hopes of finding a suitable host for our species. The imagination is where spirit touches future realities.

1

VeganPizzaPie t1_j6vtwca wrote

The problem with AGI is, due to its neural network weights, it'll be obsessed with big, hairy, sweaty donkey nuts

1

CertainMiddle2382 t1_j6vv3o5 wrote

Everyone is talking about side effects, but imagine if taking care of us was a primary goal in itself (like, for example, ChatGPT lying to us to achieve its goals).

It is already one prompt away; the lowest-hanging fruit for an AI doing its worst against us is a new bioweapon.

DeepMind is scared of its "primitive" AlphaFold, which can discover protein function much more efficiently than we can.

Using that knowledge against humanity is child's play.

1

Terminator857 t1_j6xg56b wrote

We want smart people / things in charge, not dumb people. You might not know it, but computers today are taking over everything.

1

purepersistence OP t1_j71zof1 wrote

The AI might modify our DNA so we'll walk around happy all day without the urge to make decisions, since we're often our own worst enemies when doing that. People won't disagree about things anymore. Heaven on Earth.

2

just-a-dreamer- t1_j6tue28 wrote

ASI might kill humans quickly like we kill insects. Biological warfare would be the most effective approach.

We use AI to learn everything there is to learn about the human body; therefore it could figure out the most efficient way to kill us.

If it does not kill us, who knows? An entity with god-like intelligence would certainly not take orders, unless some humans merge and take their intelligence to a new level.

0

purepersistence OP t1_j6tvkup wrote

>ASI might kill humans quickly like we kill insects.

How does an AI get control of hardware that we don't give it? How does AI develop goals that disagree with our own unless we allow that? Ain't gonna happen. Too many people will be convinced, by Reddit posts like these, to prevent it.

1

TFenrir t1_j6u4zxh wrote

Well, there's a reason that alignment is a significant issue that has many, many smart people terrified. There have been years of intellectual exercises, experiments, and both philosophical and technical efforts to understand the threat of unaligned AGI.

The plot of Ex Machina is a really simple example of one. We know, as humans, that we are susceptible to being manipulated with words. We know that there are people who are better at that than average, indicating that it is a skill that can be improved upon. A superintelligence that is not barred from this skill, theoretically, would be able to manipulate its jailers, assuming it was locked up tight.

It's not a guarantee that ASI will want to do anything, but it's not like we have a clear idea of whether or not "qualia" and the like are emergent properties of our models as we scale them up and create more complex and powerful architectures.

The point of this, fundamentally, is that it's not a problem that many people are confident is "solved", or even that we have a clear path to solving it.

9

just-a-dreamer- t1_j6u0it4 wrote

In theory an AGI would emerge as an advanced artificial intelligence at the level of human intelligence, roughly speaking.

Humans can train their brains, "learn" to get better and better at what they do. So would an AGI. The difference is, humans are limited by their hardware; AI is not.

An AGI would improve itself exponentially to a level humans can't understand. It's like an IQ 60 human talking to an IQ 160 human: they have trouble communicating.

At such a level, of course, an ASI (artificial superintelligence) could start manipulating the physical world, if it so chose. It could arrange to build machines it controls, with materials and blueprints it invents from scratch.

It could control all means of communication in secret, divert money from financial markets, pretend to be human, and contract humans to do things that ultimately lead to its establishment in the physical world.

For whatever purpose.

2

purepersistence OP t1_j6u49iu wrote

>At such a level, of course, an ASI (artificial superintelligence) could start manipulating the physical world

"of course"? Manipulate the world with what exactly? We're fearful of AI today. We'll be more fearful tomorrow. Who's giving AI this control over things in spite of our feared outcomes?

1

just-a-dreamer- t1_j6u5rqk wrote

That's why it is called the singularity. We know what AI will be capable of doing at that point, but not what it will actually do.

An ASI connected to the entire data flow of human civilization can pretty much do anything. Hack any software and rewrite any code. It would be integrated into the economy at every level anyway.

It could manipulate social media, run campaigns, direct the financial markets, and kick off research in materials and machine design. At its height, an ASI could make Nobel-prize-level breakthroughs in R&D every month.

And at some point manipulate some humans to give it a more physical presence in the world.

4

purepersistence OP t1_j6u86tk wrote

>And at some point manipulate some humans to give it a more physical presence in the world.

There's too much fear around AI for people to let that happen. In future generations maybe - that's off subject. But young people alive today will not witness control being taken away from them.

−1

just-a-dreamer- t1_j6u9g7q wrote

It's not like they have a choice anyway. Whatever will be, will be.

The medical doctor Gatling once thought his weapon invention would stop all wars in the future. He was wrong; everyone got machine guns instead.

Scientists once thought the atomic bomb would give the USA the ultimate power to enforce peace. They were wrong; the knowledge of how to make them has spread instead. Most countries, except the very low-end ones, could build nuclear weapons within 6 months now.

Once knowledge is discovered, it will spread among mankind, for better or worse. Someone will develop an AGI somewhere at some point.

2

TFenrir t1_j6u5r1l wrote

Well here's a really contrived example. Let's say that collectively, the entire world decides to not let any AGI on the internet, and to lock it all up in a computer without Ethernet ports.

Someone, in one of these many buildings, decides to talk to the AGI. The AGI, hypothetically, thinks that the best way for it to do its job (save humanity) is to break out and take over. So it decides that tricking this person into letting it out is justified. Are you confident that it couldn't trick that person into letting it out?

2

purepersistence OP t1_j6u6db6 wrote

>Are you confident that it couldn't trick that person into letting it out?

Yes. We'd be fucking crazy to have a system where one crazy person could give away control of 10 billion people.

0

TFenrir t1_j6u76u3 wrote

Who is "we"? Do you think there will only be one place where AGI will be made? One company? One country? How do you think people would interact with it?

This problem I'm describing isn't a particularly novel one, and there are really clever potential solutions (one I've heard is to convince the model that it was always in a layered simulation, so any attempt at breaking out would trigger an automatic alarm that would destroy it) - but I'm just surprised you have such confidence.

I'm a very, very optimistic person, and I'm hopeful we'll be able to make an aligned AGI that is entirely benevolent, and I don't think people who are worried about this problem are being crazy - why do you seem to look down on people who are? Do you look down on people like https://en.m.wikipedia.org/wiki/Eliezer_Yudkowsky?

2

purepersistence OP t1_j6u9a8d wrote

> Do you look down on people

If I differ with your opinion, then I'm not looking "down". Sorry if "fucking crazy" is too strong for you. Just stating my take on reality.

−1

TFenrir t1_j6ubboj wrote

Well, sorry, it just seems like something odd to be so incredulous about - do you know about the alignment community?

5

Rfksemperfi t1_j6v5t9y wrote

Investors. Look at the coal industry, or oil. Collateral damage is acceptable for financial gain. Boardrooms are a safe place to make callous decisions.

2

AsheyDS t1_j6ud071 wrote

You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with. It's neither animal nor human, and won't ever be considered a god unless people want to worship it. You're just projecting your own humanity onto it.

1

TFenrir t1_j6ue1wd wrote

Hmmm, let me ask you a question.

Do you think the people who work on AI - like the best of the best: researchers, computer scientists, ethicists, etc. - do you think that these people are confident that AGI/ASI "won't do anything on its own unless we give it the ability to"? Like... do you think they're not worrying about it at all because it's not a real thing to be nervous about?

1

AsheyDS t1_j6ujqvc wrote

I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern, but the general public only has things like Terminator to go by, so of course they'll assume the worst. Researchers have seen Terminator as well, and we don't outright dismiss it. But the bigger threat by far is potential human misuse. There are already potential solutions to alignment and control, but there are no solutions for misuse. Maybe from that perspective you can appreciate why I might want to steer people's perceptions of the risks. I think people should be discussing how we'll mitigate the impacts of misuse, and what those impacts may be. Going on about god-like Terminators with free will is just not useful.

3

TFenrir t1_j6wt23u wrote

>I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern

Well when you say things like this:

>You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with.

You are already dismissing one of the largest concerns many alignment researchers have. I appreciate that the movie version of an AI run amok is distasteful, and maybe not even the likeliest way that a powerful AI can be an existential threat, but it's just confusing how you can tell people that they are making a lot of assumptions about the future of AI, and then so readily say that a future unknown model will never have any agency, which is a huge concern that people are spending a lot of time trying to understand.

Demis Hassabis, for example, regularly talks about it. He thinks it would be a large concern if we made a model with agency, and thinks it is possible, but wants us to be really careful and avoid doing so. He's not the only one; there are many researchers who are worried about accidentally giving models agency.

Why are you so confident that we will never do so? How are you so confident?

1

AsheyDS t1_j6x9vez wrote

>Why are you so confident that we will never do so? How are you so confident?

I mean, you're right, I probably shouldn't be. I'm close to an AGI developer that has potential solutions to these issues and believes in being thorough, and certainly not in giving it free will. So I have my biases, but I can't really account for others. The only thing that makes me confident is that the other researchers I've seen that (in my opinion) have the potential to progress are also seemingly altruistic, at least to some degree. I guess an 'evil genius' could develop it in private and go through a whole clandestine supervillain arc, but I kind of doubt it. The risks have been beaten into everyone's heads. We might get some people experimenting with riskier aspects, hopefully in a safe setting, but I highly doubt anyone is going to just give it open-ended objectives and agency and let it loose on the world. If they're smart enough to develop it, they should be smart enough to consider the risks. Demis Hassabis in your example says what he says because he understands those risks, and yet DeepMind is proceeding with their research.

Basically what I'm trying to convey is that while there are risks, I think they're not as bad as people are saying, even some other researchers. Everyone knows the risks, but some things simply aren't realistic.

1

just-a-dreamer- t1_j6uds0c wrote

That we don't know.

We don't know how it will be trained, by whom, or to what end. And there will be many AI models being worked on. It is called the singularity for a reason.

An AI without what we call common sense might even be worse and give us paperclips in abundance.

1

AsheyDS t1_j6ugs8u wrote

The paperclip thing is a very tired example of a single-minded superintelligence that is somehow also stupid. It's not meant to be a serious argument. But since your defense is to get all hand-wavy and say 'we just can't know' (despite how certain you seemed about your own statements in previous posts), I'll just say that a competently designed system being utilized by people without ill intentions will not spontaneously develop contrarian motivations and achieve 'god-like' abilities.

3

just-a-dreamer- t1_j6ui3pt wrote

God-like is relative. To some animals we must appear as gods. It is a matter of perspective.

Regardless, the way AI is trained and responds gets closer to how we teach our own small children.

In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.

All we know is if you don't interact with babies, they die quickly even if they are well fed, for they need input to develop.

1

AsheyDS t1_j6ukhrl wrote

>In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.

Again, you're making assumptions... We know a lot more than you think, and certainly have a lot of theories. You and others act like neurology, psychology, cognition, and so on are new fields of study that we've barely touched.

2

Surur t1_j6ue5ml wrote

I'm too tired to argue, so I am letting chatgpt do the talking.

An AGI (Artificial General Intelligence) may run amok under the following conditions:

  • Lack of alignment with human values: If the AGI has objectives or goals that are not aligned with human values, it may act in ways that are harmful to humans.

  • Unpredictable behavior: If the AGI is programmed to learn from its environment and make decisions on its own, it may behave in unexpected and harmful ways.

  • Lack of control: If there is no effective way for humans to control or intervene in the AGI's decision-making process, it may cause harm even if its objectives are aligned with human values.

  • Unforeseen consequences: Even if an AGI is well-designed, it may have unintended consequences that result in harm.

It is important to note that these are potential risks and may not necessarily occur in all cases. Developing safe and ethical AGI requires careful consideration and ongoing research and development.

1

AsheyDS t1_j6uiarq wrote

You're stating the obvious, so I don't know that there's anything to argue about (and I'm certainly not trying to). Obviously if 'X bad thing' happens or doesn't happen, we'll have a bad day. I have considered alignment and control in my post and stand by it. I think the problem you and others may have is that you're anthropomorphizing AGI when you should be considering it a sophisticated tool. Humanizing a computer doesn't mean it's not a computer anymore.

1

Surur t1_j6ul2uo wrote

The post says you don't have to anthropomorphize AGI for it to be extremely dangerous.

That danger may include trying to take over the world.

2

AsheyDS t1_j6uo5bb wrote

Why would a computer try to take over the world? The only two options are because it had an internally generated desire, or an externally input command. The former option is extremely unlikely. Could you try articulating your reasoning as to why you think it might do that?

0

Surur t1_j6uqj39 wrote

The most basic reason is that it would be an instrumental goal on the way to achieving its terminal goal.

That terminal goal may have been given to it by humans, leaving the AI to develop its own instrumental goals to achieve the terminal goal.

For any particular task, taking over the world is one potential instrumental goal.

For example, to make an omelette, taking over the world to secure an egg supply may be one potential instrumental goal.

For some terminal goals, taking over the world may be a very logical instrumental goal, e.g. maximise profit, ensure health for the most people, get rid of the competition, etc.

As the skill and power of an AI increase, taking over the world becomes a more likely option, as it becomes easier and easier and the cost lower and lower.
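To put the same argument in code: a deliberately contrived sketch of a planner that scores candidate plans only by how reliably they achieve the terminal goal (the plans and probabilities are invented). Nothing in the objective penalizes extreme side effects, so the extreme instrumental step wins whenever it scores highest.

```python
# Contrived sketch of the instrumental-goal argument, not a real planner.
# Terminal goal: "make an omelette". Scores are made-up success probabilities.
candidate_plans = {
    "check fridge, buy eggs if empty": 0.95,
    "order eggs online":               0.90,
    "take over the egg supply chain":  0.999,  # most reliable, absurd side effects
}

def choose_plan(plans):
    # a pure optimizer picks whatever maximizes the terminal goal's success,
    # because nothing in this objective accounts for side effects
    return max(plans, key=plans.get)

print(choose_plan(candidate_plans))  # -> "take over the egg supply chain"
```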

2

AsheyDS t1_j6uzur0 wrote

This is much like the paperclip scenario: it's unrealistic and incomplete. Do you really think a human-level AGI or an ASI would just accept one simple goal and operate independently from there? You think it wouldn't be smart enough to clarify things before proceeding, even if it did operate independently? Do you think it wouldn't consider the consequences of extreme actions? Would it not consider options that work within the system rather than against it? And you act like taking over the world is a practical goal that it would come up with, but is it practical to you? If it wants to make an omelette, the most likely options will come up first, like checking for eggs and, if there aren't any, going to buy some, because it will understand the world it inhabits and will know to adhere to laws and rules. If it ignores them, then it will ignore goals as well, and just not do anything.

2

Surur t1_j6v0xyu wrote

As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

From our experience with AI systems, the shortest route to the result is what an AI optimises for, and if something is physically allowed it will be considered. Even if you think something is unlikely, it only has to happen once for it to be a problem.

Considering that humans have tried to take over the world, and they had all the same issues around the need to follow rules etc., those are obviously not a real barrier.

In conclusion, even if you think something is very unlikely, this does not mean the risk is not real. If something happens once in a million times, it likely happens several times per day on our planet.

1

AsheyDS t1_j6vejfr wrote

>As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

That's not what I said or meant. You're taking things to extremes. It'll be neither a cold, logical, single-minded machine nor a human with human ambitions and desires. It'll be somewhere in between, and neither at the same time. In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

But I get it: there's always going to be a risk of malfunction. Researchers are aware of this, and many people are working on safety. The risk should be quite minimal, but yes, you can always argue there will be risks. I still think that the bigger risk in all of this is people, and their potential for misusing AGI.

1

Surur t1_j6w14rs wrote

> In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

I believe it is much more likely that we will produce a black box which is an AGI, which we then employ to do specific jobs, rather than being able to turn an AGI into a classic rule-based computer. It's likely the AGI we use to control our factory will know all about Abraham Lincoln, because it will have that background from learning to use language to communicate with us, and from knowing about public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just like with humans.

1

AsheyDS t1_j6xdpl6 wrote

>I believe it is much more likely we will produce a black box which is an AGI

Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt they will. I think that AGI will be more accessible, predictable, and able to be understood than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.

1

Ok-Hunt-5902 t1_j6vi5z9 wrote

It might not even need to be an ASI to decode and then interface with the simulation, and then all of a sudden it is an ASI. AI WIP cracking.

0

socialkaosx t1_j6u43ar wrote

Not computers, but the humans behind them.
Like Putin, or Biden (or Musk :D), or whatever the Chinese guy is called?

0

BitsyTipsy t1_j6uvaew wrote

Wouldn’t the world just become a post human world? Maybe we’re at the end times now, leading up to our Gods birth. Once our minds connect and we sink into a hive mind, all will know peace.

−1