
Gimbloy t1_j1g4bk2 wrote

Reply to comment by Sashinii in Hype bubble by fortunum

People have been downplaying AI for too long, every year it gets more powerful and people are still like “meh, still way off AGI super-intelligence!” and they probably won’t change their mind until an autonomous robot knocks on their door and drags them into the street.

We need to start thinking seriously about how this will play out and start preparing society and institutions for what’s to come. It’s time to sound the alarm.

61

Ortus12 t1_j1ge8y2 wrote

Their bodies will be atomized into nanites by a god-like being as the last of the human survivors are hunted down, and they'll be like, "It's just copying what it read in a book. That's not even real creativity. Humans, they have real creativity."

41

Chad_Nauseam t1_j1gk7nl wrote

sure, it can outsmart me in ways i never would have imagined, but is it truly intelligent if it doesn’t spontaneously change its mind about turning its light cone into paperclips?

16

eve_of_distraction t1_j1ipzo8 wrote

Why would an AGI waste precious time and energy making paperclips? That would be crazy. Clearly the superior goal would be to convert the entire cosmos into molecular scale smiley faces.

2

vernes1978 t1_j1hfj9k wrote

> Their body will be being atomized into nanites by a god like being

People don't believe me when I tell them that most AI fans are writing religious fanfiction.

9

Ortus12 t1_j1hxtit wrote

God-like compared to us, the way we are god-like compared to ants.

A human brain is three pounds of computational matter. An AI brain on Earth could consume most of the planet's matter.

Humans can read one book at a time, and slowly. An AI can read every book ever written on every subject, every research paper on every subject, every poem, every article, every patent ever filed in any country, and synthesize it all into a deeper and more complete understanding than any human.

Once it has learned hacking (relatively trivial given all of the above), it will also be able to read every design document from every major company and gain access to all of their data and insights. That means it could create companies, or manipulate governments, to gain access to nearly all the wealth and raw materials, plus all the data (including surveillance data) to understand humans even better and do whatever it chooses.

Humans can have one experience at a time. An AI could be controlling trillions of drones having trillions of experiences simultaneously and learning from all of them. It could also be conducting trillions of experiments simultaneously, learning from all of them, and using that knowledge to design new experiments and develop new machines and technology.

4

vernes1978 t1_j1jyooz wrote

Yes, that is what I said.
Religious fan-fiction.
With zero regard to the laws of physics.

1

Ortus12 t1_j1k37up wrote

nanite clouds don't break the laws of physics.

1

vernes1978 t1_j1k5e3d wrote

Neither does pixie dust.

1

Ortus12 t1_j1k62vb wrote

What's your point? What laws of physics are broken by anything I said?

1

vernes1978 t1_j1lq3zn wrote

You can atomize a lot of people with drones within the confines of physics?
I'd also point out the problems of setting up the infrastructure required to run all those processes, and the heat this monolith of computer systems would generate.
But I guess those problems don't even exist in your version.
Neither does latency.

1

lovesdogsguy t1_j1g7tkz wrote

they probably won’t change their mind until an autonomous robot knocks on their door and ~~drags them into the street~~ has sex with them.

Couldn't resist.

14

Bruh_Moment10 t1_j1nayf0 wrote

Any future AGI would find us really cute and want us to be happy. No further context will be provided.

3

SurroundSwimming3494 t1_j1g9c1p wrote

>It’s time to sound the alarm.

I agree that we as a society should start preparing, at least in some ways, for possible future scenarios and make sure that we smoothly transition to the world that awaits us in the next few years/decades, but saying it's time to "sound the alarm" creates unnecessary fearmongering, IMO. A rehashed GPT-3 and AI-generated images, as impressive as they are, should not elicit that type of reaction. We are still a ways from AI that truly transforms the world, IMO.

5

Gimbloy t1_j1gqgqb wrote

It doesn’t need to be full AGI to be dangerous. As long as it is better than humans in some narrow setting, it could be dangerous. Example: software companies like Palantir have shown that AI can determine who wins and loses a war; it has allowed Ukraine to outperform a larger country with more military might.

Then there are all the ways it can be used to sway public opinion, generate propaganda, and win in financial markets/financial warfare. And the one I’m particularly afraid of is when it learns to compromise computer systems in a cyber-warfare scenario. Just like in a game of Go or chess, where it discovered moves that boggled the minds of experts, I can easily see an AI suddenly gaining root access to any computer network it likes.

11

SurroundSwimming3494 t1_j1h5ntf wrote

I see what you mean.

2

Saylar t1_j1h78dg wrote

To add another point to /u/Gimbloy's list.

Unemployment: As soon as we have a production-ready AI, even a narrow one, we will see massive layoffs. Why wouldn't Amazon fire their customer service people once an AI can take over the task of chatting with the customer? The cost of an AI is so much lower than that of humans doing the job; soon there won't be any jobs left in this particular field, or only very specialized ones. With this, the training and models will get better and the AI can take over even more.

Those entry-level jobs are going to go first, and where do these people go? Where can they go, really? And I doubt it will be the same as the industrial revolution, where people found jobs working the machines. I really don't see the majority of customer service reps suddenly working on improving language models.

There are a shitload of examples where this can be used, and it will be so radically different from what people know. So yeah, we need to sound the alarm bells. My prediction is that the world will start to change radically in the next 5 years, and we're not ready. Not even remotely. We need to bring the discussion front and center and raise awareness, but I have my doubts about that, to be honest. Most politicians can barely use Twitter; how are they supposed to legislate something like an AI?

Anyway, happy holidays :p

6

SurroundSwimming3494 t1_j1h89fk wrote

I can see some jobs going away this decade, but I don't think there'll be significant economic disruption until the 2030s. My overall expectation is that many lines of work will be enhanced by AI/robotics for a long while before they start reducing in size (and by size, I mean workers). I just don't see a jobapocalypse happening this decade like others on this sub do.

>The world will start to change radically in the next 5 years is my prediction and we're not ready. Not even remotely.

This is a bit excessive, in my opinion. I'd be willing to bet that the world will look very similar to the way it looks today 5 years from now. Will there be change (both technological and societal) in that time period, just like there was change in the last 5 years? Of course there will, but not so much that the world will look entirely different. Change simply doesn't happen that fast.

The world will change, and we need to adjust to that change, but I'm just saying we shouldn't go full on Chicken Little, if you know what I mean.

4

Saylar t1_j1h9skh wrote

Oh, I think we agree on this point. I don't mean we'll see massive layoffs within the next 5 years, but rather the real-world foundation for all the problems we're talking about here. They won't be just random thoughts and predictions anymore.

It will mostly look the same for the average user who isn't interested or invested in this technology, but it will be vastly different under the hood. And once the foundation is there, change will happen fast. AI will not create nearly as many jobs as it will eliminate; at least, I don't see how it could.

I see it as both real bad and real good, depending on how we use it. With capitalism at the core, I don't see it as a particularly good deal for most workers. With the way politics works, I don't see politicians reacting to it fast enough. On the other hand, it's the first time in years that I feel a tiny bit optimistic about climate change (well, combating it) and about all the advances in understanding the world around us and ourselves.

I'm mostly on this train to raise awareness among people who have no idea what is currently happening, and to stay up to date on the developments, because this will be radical change for all of us.

3

camdoodlebop t1_j1gum2e wrote

didn't you just do what the parent comment said people are doing lol

4

chillaxinbball t1_j1hd0a0 wrote

We have been preparing people for the day when an AI can do their jobs for at least 5 years now. Now that it's starting to happen, people are freaking out. People don't listen to warnings.

4

eve_of_distraction t1_j1irew4 wrote

What were they supposed to do, though? It's not as though anyone was suggesting solutions other than UBI, and regular people don't have any say in implementing that anyway.

1

chillaxinbball t1_j1j1o02 wrote

Keep up with impending technologies to stay relevant, and advocate for stronger social systems so jobs aren't strictly needed to live.

Trying to stop this tech from taking over is a waste of time. A better use of time is to fix the actual systemic issues that are the root cause of the panic.

3

eve_of_distraction t1_j1kpdsz wrote

Yeah I agree but I'm just cynical about how much influence we can have over policy.

2

chillaxinbball t1_j1kzrfn wrote

I am too, TBH, especially if you consider how only the wealthy have political influence and popular opinion essentially has none. That said, I do think it's important that this becomes a topic of discussion. No one will do anything if they are unaware.

3