Queue_Bit t1_jdxle5m wrote

Sure, there could be some theoretical wall that stops progress in its tracks. But currently, there is zero reason to believe that a wall like that exists in the near future. Even if AI only improves by a single order of magnitude, 10x, it will STILL absolutely change the world as we know it in drastic ways.

And here's the funny part. Based on research, we KNOW a 10x improvement is already guaranteed. So, I get that you want to slow the hype and want people to think critically, but the truth is that many of us are. And importantly, a greater than 10x improvement is almost certainly guaranteed.

Imagine an AI that is JUST as good as humans are at everything. Not better. Just equal. But, with the caveat that this AI can output data at a rate that is unachievable for a human. This much is certain. We will create a general AI that is as good as humans at everything. Once that happens, even if it never gets better, we will live in a world so different than today that it will be unrecognizable.

If you had asked me this time last year whether we were going to see a singularity-type event in my lifetime, I would have been unsure, maybe even leaning towards no. But now? If massive societal and economic change doesn't happen by 2030, I will be absolutely shocked. It looks inevitable at this point.

65

Gortanian2 OP t1_jdxofbi wrote

Thank you. I completely agree with all of this. The criticism I’m raising is against a literal singularity event. As in, unbounded recursive self-improvement where we will see ASI with godlike abilities weeks after AGI gets to touch its own brain.

But I agree that AGI is going to change the world in surprising ways.

21

ThePokemon_BandaiD t1_jdyqj9n wrote

If we reach human-level AGI, why would it stop there? Surely people will set AGIs on the task of self-improvement and AI development.

6

Gortanian2 OP t1_jdythpp wrote

It seems obvious, right? Just tell the AI to rewrite and improve its own code repeatedly, and it takes off.

As it turns out, recursive self-improvement doesn’t necessarily work like that. There might be limits to how much improvement can be made this way. The second article I linked gives an intuitive explanation.
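Here's a toy picture of the distinction (my own made-up numbers, not the article's math): suppose each round of self-improvement multiplies capability by some gain. Whether the process "takes off" depends entirely on whether those gains hold steady or shrink from one round to the next.

```python
# Toy model (illustrative only): capability after n rounds of self-improvement,
# where each round multiplies capability by a gain factor. If the gains shrink
# geometrically (diminishing returns), total improvement converges to a finite
# ceiling; if they stay constant, growth is unbounded.
def capability(rounds, first_gain=1.5, decay=1.0):
    cap, gain = 1.0, first_gain
    for _ in range(rounds):
        cap *= gain
        gain = 1.0 + (gain - 1.0) * decay  # next round's gain shrinks by `decay`
    return cap

for n in [1, 5, 10, 50, 1000]:
    diminishing = capability(n, decay=0.5)  # each round's gain is half the last
    constant = capability(n, decay=1.0)     # same 1.5x gain every round
    print(f"after {n:>4} rounds: diminishing ~ {diminishing:7.3f}, constant ~ {constant:.3e}")
```

With halving gains the "explosion" flattens out at roughly 2.4x no matter how many rounds you run; with constant gains it really does go vertical. Whether real recursive self-improvement looks more like the first column or the second is exactly what's in question.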

7

ThePokemon_BandaiD t1_jdyuo2r wrote

That article is from 2017, and includes no understanding whatsoever of the theories and technology being used in current generative AI.

21

ThePokemon_BandaiD t1_jdytnho wrote

Humans are definitely not the theoretical limit for intelligence.

8

Gortanian2 OP t1_jdywit9 wrote

I agree with you. I’m only questioning the mathematical probability of an unbounded intelligence explosion.

5

Ok_Faithlessness4197 t1_jdz5m2l wrote

I just read the second article you linked, and it does not provide any scientific basis for bounds on an intelligence explosion. Given the recent uptrend in AI investment, I'd give it 5-10 years before an ASI emerges. In particular, once AI takes over microprocessor development, it will almost certainly kickstart this explosion.

4

theotherquantumjim t1_jdz19vm wrote

I think AGI (depending on your definition) is pretty close already. As you've alluded to, we may never get ASI. I'm not sure that matters, really. Singularity suggests a point where the tech is indistinguishable from magic, e.g. nanotech, FTL travel, etc. I don't think we need that kind of event to fundamentally reshape society, as others have said.

2

Ok_Tip5082 t1_jdyufwc wrote

When you're in the elbow it's really hard to tell if the growth is logistic, exponential, or hyperbolic.
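For instance (toy numbers, purely illustrative): match a logistic, an exponential, and a hyperbolic curve to the same starting value and the same initial growth rate, and the early readings are nearly identical, even though one plateaus, one grows forever, and one hits infinity in finite time.

```python
# Toy comparison (made-up parameters): three growth curves that agree on the
# starting value (1.0) and the initial relative growth rate (RATE), but have
# completely different long-run behaviour.
import math

CAP = 100.0    # ceiling of the logistic curve
RATE = 0.2     # relative growth rate of all three curves at t = 0
T_SING = 5.0   # time at which the hyperbolic curve blows up (1 / RATE)

def logistic(t):
    r = RATE / (1.0 - 1.0 / CAP)  # intrinsic rate chosen so the slope matches at t = 0
    return CAP / (1.0 + (CAP - 1.0) * math.exp(-r * t))

def exponential(t):
    return math.exp(RATE * t)

def hyperbolic(t):
    return T_SING / (T_SING - t)

print(f"{'t':>4} {'logistic':>9} {'exponential':>12} {'hyperbolic':>11}")
for t in [0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 4.5]:
    print(f"{t:>4} {logistic(t):>9.3f} {exponential(t):>12.3f} {hyperbolic(t):>11.3f}")
```

For the first couple of time steps the three columns are almost indistinguishable, which is the point: noisy early data can't tell you whether you're heading for a plateau, steady exponential growth, or a finite-time singularity.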

1

Dustangelms t1_jdzeymt wrote

10x improvement of what, precisely? Are you speaking figuratively or is there a certain objective metric you have in mind?

2

Queue_Bit t1_jdzlxht wrote

I mean that we've used about 1/10th of the high-quality training data.

Which means that even with zero improvement in algorithms or methodology, assuming improvement scales linearly and no new data is created, LLMs will get about 10x better. And who knows what that looks like.
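Spelled out, the assumption is just capability scaling in proportion to the fraction of high-quality data used, which is a back-of-the-envelope guess rather than an established scaling law:

```python
# Back-of-the-envelope version of the claim above (not a real scaling law):
# capability is assumed to grow linearly with the share of high-quality data used.
fraction_used_so_far = 0.1   # "about 1/10th of the high-quality training data"
fraction_available = 1.0
implied_headroom = fraction_available / fraction_used_so_far
print(f"implied improvement under the linear assumption: {implied_headroom:.0f}x")
```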

1

Crackleflame35 t1_je0ugbz wrote

>some theoretical wall

Ever heard of this thing called climate change? AI needs power to run, and what do you think will be prioritized during a period of extended brownouts: household ACs or supercomputer server banks and processor farms? This is all a pipe dream because AI hasn't arrived in time to create solutions for the mass conversion of organic carbon to CO2 that humans have carried out along the way to enabling AI. At the very least it'll be a very tight race between powering the AIs to help us solve the problems and the problems getting worse and worse.

−1