
Sashinii t1_j1g1h8n wrote

Exponential growth is real, and that fact shouldn't be ignored; it's why some timeframe predictions seem way too optimistic but are actually rational.

There's an OurWorldInData article with graphs showing examples of exponential growth here.
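
To put rough numbers on why linear intuition underrates this, here's a tiny back-of-the-envelope sketch in Python (the millionfold gap and the yearly doubling are made-up illustrative figures, not forecasts):

```python
import math

# Illustrative numbers only: suppose a capability has to improve
# 1,000,000x to reach some milestone, and the tech doubles yearly.
gap = 1_000_000
growth_per_year = 2.0

years = math.log(gap, growth_per_year)
print(f"gap closes in ~{years:.1f} years")  # ~19.9 years

# How far short are we at the halfway point of those ~20 years?
remaining = gap / growth_per_year ** (years / 2)
print(f"still ~{remaining:,.0f}x short halfway through")  # ~1,000x
```

Halfway through the timeline you'd still be a thousandfold short, which is exactly the point where an exponential forecast looks "way too optimistic" to a linear eye.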

73

Gimbloy t1_j1g4bk2 wrote

People have been downplaying AI for too long. Every year it gets more powerful, and people are still like "meh, still way off AGI super-intelligence!" They probably won't change their mind until an autonomous robot knocks on their door and drags them into the street.

We need to start thinking seriously about how this will play out and start preparing society and institutions for what’s to come. It’s time to sound the alarm.

61

Ortus12 t1_j1ge8y2 wrote

Their bodies will be getting atomized into nanites by a godlike being as the last of the human survivors are hunted down, and they'll be like, "It's just copying what it read in a book. That's not even real creativity. Humans, they have real creativity."

41

Chad_Nauseam t1_j1gk7nl wrote

Sure, it can outsmart me in ways I never would have imagined, but is it truly intelligent if it doesn't spontaneously change its mind about turning its light cone into paperclips?

16

eve_of_distraction t1_j1ipzo8 wrote

Why would an AGI waste precious time and energy making paperclips? That would be crazy. Clearly the superior goal would be to convert the entire cosmos into molecular-scale smiley faces.

2

vernes1978 t1_j1hfj9k wrote

> Their bodies will be getting atomized into nanites by a godlike being

People don't believe me when I tell them that most AI fans are writing religious fanfiction.

9

Ortus12 t1_j1hxtit wrote

Godlike compared to us, the way we're godlike compared to ants.

A human brain is three pounds of computational matter. An AI brain on Earth could consume most of the planet's matter.

Humans can read one book at a time, and slowly. An AI can read every book ever written on every subject, every research paper, every poem, every article, every patent ever filed in any country, and synthesize it all into a deeper and more complete understanding than any human.

Once it's learned hacking, which is relatively trivial given all of the above, it will also be able to read every design document from every major company and have access to all of their data and insights. That means it could create companies, or manipulate governments, to gain access to nearly all the wealth and raw materials, plus all the data, including surveillance data, to understand humans even better and do whatever it chooses.

Humans can have one experience at a time. An AI could be controlling trillions of drones, having trillions of experiences simultaneously and learning from all of them. It could also be conducting trillions of experiments simultaneously, learning from all of them, and using that knowledge to design new experiments and develop new machines and technology.

4

vernes1978 t1_j1jyooz wrote

Yes, that is what I said.
Religious fan-fiction.
With zero regard for the laws of physics.

1

Ortus12 t1_j1k37up wrote

Nanite clouds don't break the laws of physics.

1

vernes1978 t1_j1k5e3d wrote

Neither does pixie dust.

1

Ortus12 t1_j1k62vb wrote

What's your point? What laws of physics are broken by anything I said?

1

vernes1978 t1_j1lq3zn wrote

You're atomizing a lot of people with drones within the confines of physics?
I'd also point out the problems of setting up the infrastructure required to run all those processes, and the heat this monolith of computer systems would generate.
But I guess those problems don't even exist in your version.
Neither does latency.

1

lovesdogsguy t1_j1g7tkz wrote

they probably won’t change their mind until an autonomous robot knocks on their door and ~~drags them into the street~~ has sex with them.

Couldn't resist.

14

Bruh_Moment10 t1_j1nayf0 wrote

Any future AGI would find us really cute and want us to be happy. No further context will be provided.

3

SurroundSwimming3494 t1_j1g9c1p wrote

>It’s time to sound the alarm.

I agree that we as a society should start preparing, at least in some ways, for possible future scenarios and make sure that we smoothly transition to the world that awaits us in the next few years/decades, but saying it's time to "sound the alarm" creates unnecessary fearmongering, IMO. A rehashed GPT-3 and AI-generated images, as impressive as they are, should not elicit that type of reaction. We are still a ways from AI that truly transforms the world.

5

Gimbloy t1_j1gqgqb wrote

It doesn't need to be full AGI to be dangerous; as long as it is better than humans in some narrow setting, it could be dangerous. For example, software companies like Palantir have shown that AI can help determine who wins and loses a war: it has allowed Ukraine to outperform a larger country with more military might.

Then there are all the ways it can be used to sway public opinion, generate propaganda, and win in financial markets/financial warfare. And the one I'm particularly afraid of is when it learns to compromise computer systems in a cyber-warfare scenario. Just like in a game of Go or chess, where it discovered moves that boggled the minds of experts, I can easily see an AI suddenly gaining root access to any computer network it likes.

11

SurroundSwimming3494 t1_j1h5ntf wrote

I see what you mean.

2

Saylar t1_j1h78dg wrote

To add another point to /u/Gimbloy's list:

Unemployment: As soon as we have a production-ready AI, even a narrow one, we will see massive layoffs. Why wouldn't Amazon fire their customer service people once an AI can take over the task of chatting with the customer? The cost of an AI is so much lower than that of humans doing the job that soon there won't be any jobs left in this particular field, or only very specialized ones. With this, the training and models will get better, and the AI can take over even more.

Those entry-level jobs are going to go first, and where do these people go? Where can they go, really? And I doubt it will be like the industrial revolution, where people found jobs working the machines; I really don't see the majority of customer service reps suddenly working on improving language models.

There are a shitload of examples of where this can be used, and it will be so radically different from what people know, so yeah, we need to sound the alarm bells. My prediction is that the world will start to change radically in the next 5 years, and we're not ready. Not even remotely. We need to bring the discussion front and center and raise awareness, but I have my doubts about that, to be honest. Most politicians can barely use Twitter; how are they supposed to legislate something like AI?

Anyway, happy holidays :p

6

SurroundSwimming3494 t1_j1h89fk wrote

I can see some jobs going away this decade, but I don't think there'll be significant economic disruption until the 2030s. My overall expectation is that many lines of work will be enhanced by AI/robotics for a long while before they start reducing in size (and by size, I mean workers). I just don't see a jobapocalypse happening this decade like others on this sub do.

>The world will start to change radically in the next 5 years is my prediction and we're not ready. Not even remotely.

This is a bit excessive, in my opinion. I'd be willing to bet that the world will look very similar to the way it looks today 5 years from now. Will there be change (both technological and societal) in that time period just like there was change in the last 5 years? Of course there will, but not so much change that the world will look entirely different in that timespan. Change simply doesn't happen that fast.

The world will change, and we need to adjust to that change, but I'm just saying we shouldn't go full on Chicken Little, if you know what I mean.

4

Saylar t1_j1h9skh wrote

Oh, I think we agree on this point. I don't mean we'll see massive layoffs within the next 5 years, but rather the real-world foundations being laid for all the problems we're talking about here. They won't be just random thoughts and predictions anymore.

It will mostly look the same for the average user who isn't interested or invested in this technology, but it will be vastly different under the hood. And when the foundation is there, change will happen fast. AI will not create nearly as many jobs as it will eliminate; at least, I don't see how it could.

I see it as both real bad and real good, depending on how we use it. With capitalism at the core, I don't see it as a particularly good deal for most workers. With the way politics works, I don't see politicians reacting to it fast enough. On the other hand, it's the first time in years that I've felt a tiny bit optimistic about climate change (well, combating it) and about all the advances in understanding the world around us and ourselves.

I'm mostly on this train to raise awareness among people who have no idea what is currently happening, and to stay up to date on developments, because this will be a radical change for all of us.

3

camdoodlebop t1_j1gum2e wrote

didn't you just do what the parent comment said people are doing lol

4

chillaxinbball t1_j1hd0a0 wrote

We have been preparing people for the day when an AI is able to do their job for at least 5 years now. Now that it's starting to happen, people are freaking out. People don't listen to warnings.

4

eve_of_distraction t1_j1irew4 wrote

What were they supposed to do though? It's not as though anyone was suggesting solutions, other than UBI, and regular people don't have any say about implementing that anyway.

1

chillaxinbball t1_j1j1o02 wrote

Keep up with emerging technologies to stay relevant, and advocate for stronger social systems so jobs aren't strictly needed to live.

Trying to stop this tech from taking over is a waste of time. A better use of time is to try to fix the actual systemic issues that are the root cause of the panic.

3

eve_of_distraction t1_j1kpdsz wrote

Yeah I agree but I'm just cynical about how much influence we can have over policy.

2

chillaxinbball t1_j1kzrfn wrote

I am too, TBH, especially when you consider how only the wealthy have political influence while popular opinion has essentially none. That said, I do think it's important that this becomes a subject of public discussion. No one will do anything if they're unaware.

3

overlordpotatoe t1_j1ghjdn wrote

Some of those are crazy, like the cost to sequence a full human genome: almost $100 million in 2001, dropping to under $500 now. And the computational power of the fastest supercomputers is growing so fast that it's best viewed on a log scale, because on a linear graph everything before 2011 may as well be zero compared to what we have now. Since that graph only goes up to 2021, that's a 100x increase over the course of just ten years or so.
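
Out of curiosity, the per-year rates implied by those graphs are easy to back out; here's a quick sketch in Python (using the rough start/end figures above, so take the outputs as approximate):

```python
def cagr(start, end, years):
    """Per-year multiplication factor implied by going start -> end."""
    return (end / start) ** (1 / years)

# Genome sequencing cost: ~$100,000,000 (2001) -> ~$500 (~21 years later).
cost = cagr(100_000_000, 500, 21)
print(f"sequencing cost: x{cost:.2f} per year")  # ~0.56, i.e. -44% per year

# Fastest supercomputer: ~100x over the ten years up to 2021.
perf = cagr(1, 100, 10)
print(f"supercomputer performance: x{perf:.2f} per year")  # ~1.58 per year
# 1.58x per year compounds to a doubling roughly every 18 months.
```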

18

fortunum OP t1_j1g3rto wrote

How does this address any of the points in my post though?

Extrapolating from current trends into the future is notoriously difficult. We could hit another AI winter, all progress could stall, and a completely different domain could take over the current hype. The point is to have a critical discussion instead of just posting affirmative news and theory.

10

Sashinii t1_j1g6o2q wrote

This Yuil Ban thread, "Foundations of the Fourth Industrial Revolution", explains it best. I recommend reading the entire thread, but if you don't want to, here are some quotes:

"The Fourth Industrial Revolution is the upcoming/current one. And this goes into my second point: we won't know when the Fourth Industrial Revolution started until WELL after it's underway.

Next, "inter-revolutionary period" refers to the fact that technology generally progresses in inter-twining S-curves and right as one paradigm peaks, another troughs before rising. This is why people between 1920-1940 and between 2000 and 2020 felt like all the great technologies of their preceding industrial revolutions had given way to incremental iterative improvements and great laboratory advancements that never seemed capable of actually leaving the laboratory. If you ever wondered why the 2000s and 2010s felt indistinguishable and slow, as if nothing changed from 1999 to the present, it was because you were living in that intermediate period between technological revolutions. During that time, all the necessary components for the Fourth Industrial Revolution were being set up as the foundations for what we're seeing now while simultaneously all the fruits of the Third Industrial Revolution were fully maturing and perhaps even starting to spoil, with nothing particularly overwhelming pushing things forward. You might remember this as "foundational futurism."

As it stands, a lot of foundational stuff tends to be pretty boring on its own. Science fiction talks of the future being things like flying cars, autonomous cars, humanoid servant robots, synthetic media, space colonies, neurotechnology, and so on. Sci-fi media sometimes set years for these things to happen, like the 1990s or 2000s. Past futurists often set similar dates. Dates like, say, 2020 AD. According to Blade Runner, we're supposed to have off-world colonies and 100% realistic humanoid robots (e.g. with human-level artificial general intelligence) by now. According to Ray Kurzweil, we were supposed to have widespread human-AI relationships (ala Her) and PCs with the same power as the human brain by 2019. When these dates passed and the most we had was, say, the Web 2.0 and smartphones, we felt depressed about the future.

But here's the thing: we're basically asking why we don't have a completed 2-story house when we're still setting down the foundation, a foundation using tools that were created in the preceding years.

We couldn't get to the modern internet without P2P, VoIP, enterprise instant messaging, e-payments, business rules management, wireless LANs, enterprise portals, chatbots, and so on. Things that are so fundamental to how the internet circa 2020 works that we can scarcely even consider them individually. No increased bandwidth for computer connections? No audio or video streaming. No automated trading or increased use of chatbots? No fully automated businesses. No P2P? No blockchain. No smartphones or data sharing? No large data sets that can be used to power machine learning, and thus no advanced AI.

Finally and a bit more lightheartedly, I'd strongly recommend against using this to predict future industrial revolutions unless you're writing a pulp sci-fi story and need to figure out roughly when the 37th industrial revolution will be underway. If the Fourth Industrial Revolution pans out the way I feel it will, there won't be a Fifth. Or perhaps more accurately, we won't be able to predict the Fifth, specifically when it'll take place and what it will involve."

24

Chad_Nauseam t1_j1gkrkj wrote

If there's a 10% chance that existing trends in AI continue, it's the only news story worth covering. It's like seeing a 10% chance of aliens heading towards Earth.

13

lovesdogsguy t1_j1iig2g wrote

Reminds me of that Stephen Hawking quote about AI. I'm paraphrasing here, but it's something like,

"if Aliens called tomorrow and said, hey btw, we're on our way to Earth, see you in about 20 years, we wouldn't just say, 'ok great,' and then hang up the phone and go back to our routine. The entire world would begin to prepare for their arrival. It's the same with AI. This alien thing is coming and nobody's preparing for it."

I think his analogy is very apt.

1

Ortus12 t1_j1gg2ws wrote

The last AI winter was caused by insufficient compute. We now have sufficient compute, and we've discovered that no new algorithmic advances are necessary: all we have to do is scale up compute for existing algorithms, and intelligence scales along with it.

There are no longer any barriers to scaling compute, because internet speeds are high enough that all compute can live in server farms that are continually expanded. Energy costs are coming down toward zero, so that's not a limiting factor.

The feedback loop now is: AI makes money, money buys more compute, AI becomes smarter and makes more money.

The expert systems of the '80s and '90s grew too complex for dumb humans to manage. This is no longer a bottleneck because, again, all you have to do is scale compute. Smart programmers can accelerate that by optimizing and designing better data curation systems, but again, it's not even necessary. It's now a manual labor job that almost anyone can be hired to do (plugging in more computers).
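
For what it's worth, the "just scale compute" picture rests on the empirical power-law scaling curves. A minimal illustration in Python (the constants here are hypothetical, merely in the ballpark of published fits, not taken from any specific paper):

```python
# Hypothetical power-law scaling curve: loss(C) = a * C^(-alpha).
a, alpha = 10.0, 0.05  # illustrative assumptions, not fitted values

def loss(compute):
    """Modeled training loss as a function of relative compute."""
    return a * compute ** -alpha

for mult in (1, 10, 100, 1_000, 10_000):
    print(f"{mult:>6}x compute -> loss {loss(mult):.2f}")
# Every 10x of compute trims a fixed ~11% off the loss here: smooth and
# predictable gains, but with steeply diminishing returns per dollar.
```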

12

GuyWithLag t1_j1hgj0h wrote

Dude, no. Listen to the PhDs - the rapture isn't near, not yet at least.

On a more serious note: this is what the OP refers to when talking about a "hype bubble". The professionals working in the field know that the current crop of AI models is definitely not suitable as an architecture for AGI, except maybe as components thereof. Overtraining is a thing, and it's been shown that overscaling is too. Dataset size is king, and the folks who create the headline-grabbing models have already fed the public internet into their datasets.

From a marketing standpoint, there's the second-mover advantage: see what others did, fix their issues, and choose a different promotion vector. You're seeing many AI announcements in a short span due to the bandwagon effect, with a small number of teams showing off multiple years' worth of work at once.

6

lil_intern t1_j1hnp2k wrote

If by rapture you mean evil robots taking people out of their houses, then yes, that's far off. But what about millions of people's careers becoming obsolete overnight, every other month, due to AI growth in unexpected fields? That seems pretty close.

3

Ortus12 t1_j1hzcoy wrote

The current popular AI models are only what works best on the current hardware.

We've already designed tons of different models, outlined in many older AI books, that can be used as compute scales (as AI companies make more money to spend on more compute). Even the current models weren't invented recently; they're just now practical because the hardware is there.

There have been a few algorithmic optimizations along the way, but a larger portion of the gains has come from hardware.

Second-mover companies are taking out first movers by improving on their work, but that still keeps the ball moving forward.

1

ThePokemon_BandaiD t1_j1ipluc wrote

First of all, current big datasets aren't the full internet, just large subsections: specific datasets of pictures or regular text. We also generate about 100 zettabytes of new data yearly as of this year, and generative models can (with the help of humans to sort it for value, for now) generate their own datasets. And while currently available LLMs and image recognition/generation models are still quite narrow, the likes of Gato, Flamingo, etc. have shown that at the very least multimodal models are possible with current tech, and IMO it's pretty clear that narrower AI models could be combined to create a program that acts as an AGI agent.
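
For a sense of scale, a quick comparison (only the 100-zettabyte figure comes from the paragraph above; the corpus size is my rough assumption for a large text dataset):

```python
# Order-of-magnitude comparison; both sizes are approximations.
ZB = 10 ** 21  # bytes in a zettabyte
TB = 10 ** 12  # bytes in a terabyte

data_generated_per_year = 100 * ZB  # figure cited above
assumed_llm_text_corpus = 10 * TB   # rough guess at a big training set

ratio = data_generated_per_year / assumed_llm_text_corpus
print(f"yearly data output ~{ratio:.0e}x a big text corpus")  # ~1e+10
```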

1

YesramDeens t1_j1jzcgo wrote

> Listen to the PhDs - the rapture isn't near, not yet at least.

Stop with this misinformation: for every three PhDs saying we will have an AI winter, there are six AI researchers at companies like OpenAI and DeepMind who are extremely excited about the potential of the systems they are creating.

Your unnecessary doomerism is born of a sense of superiority and intellectual arrogance. Don't be surprised if you end up humbled later on.

1

Krillinfor18 t1_j1hetv3 wrote

The poster addressed both of your points.

Your points seem to be:

1: People you've met in the ML field don't talk much about AGI.

2: You don't believe that LLMs will lead to an AGI or a singularity.

This poster is saying that neither of those things matters if the trend of exponential technological growth continues. Technological growth will progress in a rapid and nonintuitive fashion, such that things that seem possible only in the next few hundred years could occur in just the next few decades.

It's true that the trend is not guaranteed to continue, but it seems unlikely (at least in my mind, and clearly in others') that even significant economic or societal shifts could alter its course.

4

AndromedaAnimated t1_j1hrdd5 wrote

THANK YOU!

I love how you show that OP is not giving ANY arguments for ANY critical discussion except his religion (which is "I don't belieeeeeeve in AGI," which is just as insane as "I belieeeeeeve in AGI").

0

[deleted] t1_j1g5br4 wrote

[deleted]

3

fortunum OP t1_j1g63rc wrote

See, the big shiny things we see in "AI" today are driven by a single paradigm change at a time; think convolutions for image processing and transformers for LLMs. Progress could come from new forms of hardware (as it tends to, btw, more so than from actual algorithms), like when we started using GPUs. The current trend suggests it makes sense to build hardware more like we build the models (neuromorphic hardware); this way you can save orders of magnitude of energy and compute so that it operates more like the brain. This is only one example of what could happen. It could also be that language models stop improving, as we are apparently nearing the limit of language data.

5

DaggerShowRabs t1_j1hn4iy wrote

An actual AI winter at this point is about as likely as society instantaneously collapsing.

An AI winter is not a valid concern for anyone in the industry for the foreseeable future.

I get wanting to have a critical discussion about this, but when someone talks about exponential growth, you need to do better than parroting a talking point spewed out by mainstream journalists who have no idea what they're talking about.

I'm all for critical discussion, but talking about another actual AI winter like the ones in the '70s or early 2000s is kind of a joke. I'm really surprised anyone with even a little knowledge of what's going on in the industry would say something this out of touch.

And none of that is to say AGI is imminent, just that an AI winter is literally the most out-of-touch counterpoint you could possibly use.

2

AndromedaAnimated t1_j1hr2l2 wrote

You are not the master of this subreddit. 🙄 Why does everyone think they can decide what others talk about?

−1

eve_of_distraction t1_j1isyex wrote

They don't. There is an extremely obnoxious and noisy minority, and a mostly silent majority.

1