Comments

Bataranger999 t1_irgd2qo wrote

I seriously think we'll have something on the AGI scale between 2027 and 2033

39

keefemotif t1_irjsdqv wrote

I did some work on estimates of this in 2011, and many very smart people estimated 5-10 years. I think there's a strong cognitive bias around that time frame. It's easy for people, especially young people, to picture a personal nonlinear event happening in around "5-10 years," while 20 or 30 years are harder to conceptualize. It's easy to make a sacrifice today to pay for a new car or an engagement ring in 5 years, but harder to plan for retirement in 30.

Yes, making really huge neural nets like GPT-3, DALL-E, etc. is causing a nonlinear event. Extrapolating without justification that the nonlinearity will continue until singularity is a dangerous modeling error. Consider sigmoidal functions and how they show up in everything from predator-prey dynamics to bacterial conjugation. Those have a nonlinearity that subsides when a balance is reached.

I think the probability of a singularity in any given year increases each year, but it's going to be a stacked sigmoidal function as different performance bottlenecks are reached. I don't think there's any significant chance in the next 5 years, unless it comes from a top-secret government lab somewhere; I think interconnect latency is still too high. I think it will require some kind of advanced neuromorphic memristor system, maybe in the form of tensor chips in phones if a distributed model is possible.
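
To make the stacked-sigmoid idea concrete, here's a minimal Python sketch; the midpoints and rates are invented for illustration, not estimates:

```python
import numpy as np

def sigmoid(t, midpoint, rate):
    """Logistic curve centered at `midpoint` with steepness `rate`."""
    return 1.0 / (1.0 + np.exp(-rate * (t - midpoint)))

def stacked_capability(years):
    """Capability as a sum of logistic waves, one per bottleneck solved.
    Each wave looks exponential on the way up, then flattens as that
    bottleneck's balance point is reached."""
    waves = [(2012, 0.8), (2022, 0.9), (2031, 0.5)]  # (midpoint year, rate)
    return sum(sigmoid(years, m, r) for m, r in waves)

years = np.arange(2000, 2045, 5)
for y, c in zip(years, stacked_capability(years)):
    print(f"{y}: {c:.2f}")
```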

8

Clen23 t1_iri8f95 wrote

that's very specific haha

5

DungeonsAndDradis t1_iriso1q wrote

Ray Kurzweil, a very famous futurist with an emphasis on artificial intelligence, believes we'll have Artificial General Intelligence (an AI agent that is generally as intelligent as a human knowledge worker) by 2029.

So this estimate is right in line with Kurzweil's, among others', estimates.

9

phoebemocha t1_iriz8zz wrote

that's 5 to 10 years💀

2

Clen23 t1_irj76wp wrote

makes sense when you think from 2022 but the dates by themselves sound like random numbers haha

1

AgginSwaggin t1_irhf0m2 wrote

I created basically the same poll a few months ago, but the moderators removed it because it was "too similar to the yearly 'when will the singularity happen' thread."

Please upvote this comment (and the post) to send a message to the moderators not to delete this post either.

The poll adds valuable information on how the general consensus changes over time.

38

TemetN t1_irjs4lh wrote

Honestly, that post should still be pinned. I enjoy reading updates/new thoughts on this. I could swear there was a discussion a while ago on doing more such threads in fact.

6

DEATH_STAR_EXTRACTOR t1_irnyo6p wrote

Yes, good idea to see it change over time. We need to revote. Also, save these stats in case they get deleted.

3

TemetN t1_irgglyl wrote

In terms of weak AGI (broadly meeting human level on benchmarks), I'd say by 2025. I think people tend to either underestimate progress in this area or consider AGI from a different perspective than simply broad human-level performance.

27

DungeonsAndDradis t1_irghzsf wrote

That's my guess as well, just with the rapid advancements this year alone. Gato, PaLM, and LaMDA are crazy.

20

TemetN t1_irgisen wrote

Yes, that and the dataset misses (most notably with MATH) make me think this is going to surprise even people tracking the field.

Honestly, we're due for another major LLM drop soon. It's easy to get lost in all the other stuff, but most of the recent focus has been elsewhere.

17

SejaGentil t1_irirwko wrote

what are those?

5

sideways t1_irl39bo wrote

PaLM's logical reasoning really blew my mind. That, more than anything, convinced me that we are close to AGI.

3

sumane12 t1_irgazhf wrote

Depends on how you define AGI: very primitive AGI by 2030, human-level AGI by 2035.

22

DungeonsAndDradis t1_irit0yi wrote

I like the Coffee Test (from Steve Wozniak): (Number 2 on this page) https://analyticsindiamag.com/5-ways-to-test-whether-agi-has-truly-arrived/

Basically, put an AI-powered robot in a random house in America and instruct it to make a cup of coffee.

3

sumane12 t1_iriy7pl wrote

So I see a few problems with this. Number one, some of the smartest people I know make terrible coffee. Number two, I'm sure some people with really low intelligence can make great coffee. I can also imagine a closed-system narrow AI trained on enough data to complete this task with no general intelligence. Fun fact: I just asked GPT-2 to describe the steps in making a cup of coffee, and it was extremely close (apart from boiling the water), so much so that I'm guessing GPT-3 would have no issue with it. Add some image recognition and some motor function, and I'm pretty sure a few current AIs could accomplish this in 99% of situations.
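
For anyone who wants to try the GPT-2 bit themselves, a rough sketch using the Hugging Face `transformers` library (the prompt and sampling settings are my guesses, not exactly what I ran):

```python
# pip install transformers torch
from transformers import pipeline

# Load the small public GPT-2 checkpoint as a text generator.
generator = pipeline("text-generation", model="gpt2")

prompt = "The steps to make a cup of coffee are: 1."
outputs = generator(prompt, max_length=120, do_sample=True,
                    temperature=0.7, num_return_sequences=1)
print(outputs[0]["generated_text"])
```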

2

DungeonsAndDradis t1_irj17jb wrote

The coffee doesn't have to be good.

The AI needs to be at a level where it can control a robot, not just write out text steps for making coffee.

The step we're missing is the general intelligence to control a robot and actually make the coffee.

3

SlowCrates t1_irjfqcy wrote

I'm less intimidated by putting an alternator in a car than by a coffee machine, because I hate coffee and don't drink it.

1

TopicRepulsive7936 t1_irgl3aw wrote

AGI has one (1) exact definition. Sorry.

−15

sumane12 t1_irgqb5a wrote

Yea but that's not true tho.

9

TopicRepulsive7936 t1_irgsogq wrote

Explain. How many definitions have you come across?

1

sumane12 t1_irhve4c wrote

Intelligence of smartest human.

Intelligence of dumbest human.

Intelligence of average human.

Human intelligence and sentient.

Human intelligence and not sentient.

Generalise from one task to a second.

Generalise from one task to multiple tasks.

Generalise from one task to every task achievable to a human.

Generalise from one task to every task achievable by every human.

The 'G' in AGI stands for general, meaning any AI that is able to generalise skills from one task to another, e.g. being trained on Go and transferring those skills to chess. That is the simplest definition of AGI.

10

subdep t1_irhi2u5 wrote

AGI == Human level intelligence

The problem is that humans have a wide spectrum of intelligence. I believe you were making the first point, and the other person was trying to make the other point.

I think that we can all agree that regardless of AGI intelligence levels, once we get something smarter than any single human that’s ever existed, then we are in ASI territory.

3

TheSingulatarian t1_iri68vt wrote

I think there are at least 4 levels of AGI.

AGI on the level of a child or a slow human.

AGI at IQ 100, average human intelligence.

AGI at IQ 120, above-average human intelligence.

AGI at IQ 140 to 170, genius-level human intelligence.

The fifth level, ASI, is far beyond any level of intelligence that humans are capable of achieving.

1

NativeEuropeas t1_irggi7w wrote

People are too optimistic here. It's not going to be sooner than 2040.

16

Kinexity t1_irgx4hb wrote

The hopium is being consumed here at a high rate.

8

Effective-Dig8734 t1_irglpt7 wrote

What do you think AGI is?

5

NativeEuropeas t1_irgm0ih wrote

AI that can develop sentience

3

Effective-Dig8734 t1_irgm3ge wrote

And sentience is?

5

NativeEuropeas t1_irgmlpe wrote

What's your point, mate

−17

Effective-Dig8734 t1_irgmp0h wrote

That you're saying people are overly optimistic about the arrival date of AGI, but you don't seem to be especially informed on what AGI even is.

12

NativeEuropeas t1_irgmz3q wrote

That's not an argument. That's just ad hominem.

If I'm mistaken, then please explain how and why I am wrong instead of attacking my knowledge without any substance and without saying anything meaningful.

−1

Hotchillipeppa t1_irgntio wrote

Saying "you do not seem to be especially informed" after being given a non-answer is quite possibly the nicest way they could've put that.

16

NativeEuropeas t1_irgoigl wrote

The nicest way to say that any attempt at a discussion with him is a waste of time? Yeah, I guess you're right...

−1

ihopeimnotdoomed t1_irhfvh8 wrote

That's just how you took it. Sentience doesn't necessarily have anything to do with AGI.

10

dilznup t1_irhw3u4 wrote

Oh my god dude, you make an assumption and then are unable to develop your perspective. That's anti-discussion.

6

NativeEuropeas t1_irhwsd7 wrote

I'm not here to stroke my ego, I don't need to win every internet argument, and I'll gladly be proven wrong.

The dude who replied above didn't really engage in a discussion; he didn't ask me "why do you think so?", and instead went in with stupid questions that led nowhere.

0

dilznup t1_irhwwbz wrote

He asked for your definition of sentience; that's a crucial point for understanding your argument, and you're the one who refused to elaborate. You were not attacked at this point.

9

NativeEuropeas t1_irhx77e wrote

I understand that, but what is the purpose of asking for general definitions when the information is available on Wikipedia? My answer wouldn't really be that different from what's available to the general public.

0

dilznup t1_irhxktq wrote

There is an active debate about what sentience is, and depending on what you put in the word, it could explain why you see AGI happening later than most voters.

5

Ashamed-Asparagus-93 t1_irlxscw wrote

So we can see your perspective and better understand your opinion on the matter; that's the purpose.

1

Smoke-away t1_irhdlgz wrote

AGI 🤖 2022

11

intergalacticskyline OP t1_irhdp0q wrote

I like how you think! Which company do you think is gonna make it happen? How confident are you that it's gonna happen?

4

Smoke-away t1_irhfjfp wrote

I always thought it would be DeepMind, but OpenAI is getting close.

The big wildcard comes from the open-source movement led by StabilityAI (known for Stable Diffusion).

The number of projects that have spun off from Stable Diffusion is enormous. They far outweigh the impact/reach of DALL-E 2 by OpenAI. I could see a similar thing happening with the next big large language models, like GPT-4. You could imagine a scenario where OpenAI releases GPT-4, then StabilityAI or a similar organization releases an open-source version a while later, and then the community builds a large number of projects on top of that. In this scenario, one of the leaders could release a pre-AGI model, and a competitor, or even an individual, could use this momentum to go beyond, if that makes any sense.

As John Carmack said on the Lex Fridman Podcast:

> It is likely that the code for artificial general intelligence is going to be tens of thousands of lines of code, not millions of lines of code. This is code that conceivably one individual could write.

As we get closer to AGI, companies will be incentivized to keep their best models private for as long as possible so they don't get leapfrogged upon release. Others take the opposite approach to try and keep these models as open and widely available as possible to try and avoid a winner-takes-all scenario.

8

MercuriusExMachina t1_irjxtx5 wrote

I agree with most of what you say, but please note that one key difference between diffusion models and language models is size, and thus compute cost. Diffusion models are really tiny compared to language models.

3

SlowCrates t1_irjfcfg wrote

The fact that there's an AI that can read a story and illustrate it on its own tells me it's closer than a lot of people think. That's suspiciously close to imagination.

Humans simulate a lot more than we generally want to believe. And we are constantly in a feedback loop, cross-referencing our view of ourselves to our view of the world -- making sure everything is the way it "should" be. I think that over the next 5-10 years studies will become uncomfortably revealing as to how machine-like we are, while increasingly advanced AI's begin to out-simulate us to the point that philosophically, we begin to panic to find the "us" in us.

10

Nadeja_ t1_iriomqy wrote

Artificial General Intelligence, as an agent that isn't trained on a single task and can generalize.

What follows is an optimistic scenario.

Early proto/sub-human AGI: now - "With a single set of weights, GATO can engage in dialogue, caption images, stack blocks with a real robot arm, outperform humans at playing Atari games, navigate in simulated 3D environments, follow instructions, and more". Not great yet (it may seem a jack of all trades and master of none), but with an improved architecture and scaling up, the possible developments sound promising.

SH-AGI (sub-human): Q4 2023 to 2024 - as long as nuking doesn't happen, nor the next political delirium. The SH-AGI would be a considerable improvement over GATO and would be capable of discussing with you, at LaMDA+ level, the good-quality video that it is generating. At times it would feel even human and even sentient, but at other times you would still facepalm in frustration; in fact, memory and other issues and weaknesses won't be fully resolved yet. Also (like the current models that draw weird hands) it would still do some weird things, not realizing they don't make full sense.

HL-AGI (human-level) / Strong AI: around 2026 (but still apparently not really self-aware), developing through around 2030, when it would be a strong AI, possibly self-aware, conscious, and not just reacting to your input. Although qualitatively not super-human, just as smart as a smart human (and now fully aware of what hands are, how they move, what makes sense, etc.), quantitatively it would beat any human with sheer processing power, running 24/7 and trained more than any human could be in a multitude of lifetimes, for any possible skill, and connecting all this knowledge and these skills together, understanding and having ideas that no human could even imagine.

At that point, hope that the alignment problem is solved well enough and you aren't facing a manipulative HL-AGI instead. This won't be just about values (you can't even "align" humans to values, rights, crimes, except broadly), but an alignment to core goals (which for humanity, as for any other species on Earth, is "survive"). The aligned HL-AGI would see her/him/them/itself as part of humanity, sharing the same goal of survival. If that doesn't fully happen, good luck.

ASI (super-human): not too many years after. This would happen when the AI becomes qualitatively superior in every human cognitive skill. Reverse engineering the human brain is a thing, but can you imagine *super*-human reasoning? You could probably, intuitively, guess that there is something smarter than the way you can think, but if you can figure out what it is, you are already that intelligent, therefore it's not super-intelligent. Do you see what I mean? As a human-level intelligence you can barely figure out how to engineer a human-level intelligence. To go above, you could think of an indirect trick, e.g. scaling up the human-level brain or using genetic algorithms, hoping that something emerges by itself. However, since the HL-AGI would also be a master coder and a master engineer, with a top-notch understanding of how the brain works, and the master of anything else... maybe it would be able to figure out a better trick.
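
(As a toy illustration of the genetic-algorithm trick: mutate, select, repeat, and hope something good emerges. The fitness function here is a meaningless stand-in, just to show the loop.)

```python
import random

def fitness(genome):
    # Stand-in objective; a real run would score some actual capability.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=50, genome_len=10, generations=200, mut_scale=0.1):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [[g + random.gauss(0, mut_scale) for g in parent]
                    for parent in survivors]  # mutation: perturb each gene
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))  # approaches 0 as genes near 0.5
```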

Once there, you couldn't possibly understand what the ASI thinks anymore, even if the ASI were trying to explain it to you, just as you couldn't explain quantum computing to your hamster; and then it would be like explaining it to the ants, and then...

9

paulalesius t1_irj8n0n wrote

I'm going with 2025. The corporations that matter have succeeded, so they cannot prevent the singularity now; it will all unravel fast.

But even if it's by 2030, the years will pass by quickly. We'll have to survive, guys, 2030 is soon enough.

7

Freds_Premium t1_irhn23r wrote

AGI is already here and is disguised.

4

phriot t1_irgl8y9 wrote

Well before 2100.

I really think somewhere around 2035 with a huge error bar towards the next century. I think the first thing we think is AGI will probably be an expert system that is just really good at making us think it's an AGI. Like, it probably wouldn't be too hard to train an AI on near-present speculative fiction to get it to learn the right things to say. Combine that model with a few others (MuZero comes to mind, one of the image creation models, etc.), and it would probably be pretty convincing, but not actually do anything of its own volition.

A system that has human-level competencies in enough areas and a convincing amount of free will might take some more time.

3

[deleted] t1_irhw9ef wrote

[deleted]

3

TheSingulatarian t1_iri6oxm wrote

I don't think intelligence is necessarily a function of the number of neurons. There are species of birds that don't have big brains yet show very good problem-solving ability. It probably has something to do with the type of neurons and the way they are connected that generates intelligence.

2

[deleted] t1_irixw8g wrote

[deleted]

1

TheSingulatarian t1_irj7qvo wrote

Exponential growth. A lot of the theorizing seems to be about reaching computers with a certain number of artificial neurons capable of X petaflops and such. I'm not sure that size alone is going to deliver the desired result.
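
For reference, the usual back-of-envelope behind those petaflop targets; every number below is a rough order-of-magnitude assumption, not a measurement:

```python
# Commonly cited order-of-magnitude figures, not measurements.
neurons = 8.6e10             # ~86 billion neurons in a human brain
synapses_per_neuron = 1e3    # often quoted as 1,000-10,000; low end here
max_firing_rate_hz = 1e2     # neurons fire at up to ~100 Hz

ops_per_second = neurons * synapses_per_neuron * max_firing_rate_hz
print(f"~{ops_per_second:.1e} synaptic events/s")  # ~8.6e15, i.e. ~10 petaops/s

# Supercomputers passed this low-end figure years ago with no AGI in sight,
# which is the point: raw op counts alone may not deliver the result.
```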

1

TrainquilOasis1423 t1_irj8zg9 wrote

Really, it depends on your definition of AGI. There's been a LOT of goalpost-moving in the last couple of years.

3

Snap_Zoom t1_irqkkve wrote

Take my upvote.

As an armchair observer (is there any other kind?), it seems that the goalposts have been moving for more than 30 years now.

The closer we get, the further the goalposts will be moved.

1

OkFish383 t1_irk6fhb wrote

Wait for GPT-4, and then we will see what exponential growth means. I think by 2025 it could already be there.

3

Desperate_Donut8582 t1_irgef1i wrote

We don't know; AGI might be an entirely different thing than you are imagining. People 100 years ago imagined the future as "Victorian retro-futuristic" because that's what they had experienced, but now it's a totally different place. It will probably be the same in the future.

2

nihal_gazi t1_iri37yo wrote

As an AI researcher, I have been working on AI for the last 2 years and have been developing AGI. It uses a thinking architecture similar to a human's. And trust me, the algorithm is way simpler than what the rest of the world is using. I am expecting to complete it by this year. Moreover, it will be a chatbot version of the human brain, and I promise it won't need any high-power GPUs for training, because a basic mobile processor would work. And trust me, it's not going to be the stereotypical attention mechanism. It's different.

But right now, I cannot reveal the algorithm because I haven't told my parents about it for patenting.

2

intergalacticskyline OP t1_iri3cd9 wrote

This is actually pretty funny ha ha thanks for the laugh!

6

nihal_gazi t1_iri3j77 wrote

Welcome XD. But I was not joking really

2

intergalacticskyline OP t1_iri3mr8 wrote

No offence, but a 15-year-old won't be creating AGI. I doubt you could even create a text-to-image model that's half as good as DALL-E or Stable Diffusion. Good try though, and thanks again for the laugh.

4

nihal_gazi t1_iri3ycs wrote

Yeah, my friends say the same. Nah, I actually did make text-to-image models, though honestly not like DALL-E, due to my computer's processor limitations. Because DALL-E has gained popularity, it will be the talk of the present. But in terms of AI, I have made several programs, including an Android AI text-to-text app that is 9900% faster than RNN-based neural nets and puts GPT-1 out of competition for now.

1

SlowCrates t1_irjhpih wrote

Young people learn way more efficiently because their brains are still developing. Our adult selves are still on whatever trajectory was set by our younger selves. If a child is passionate about this subject matter, by the time they're 15 they'll probably be extremely good at it. By the time they're 25, they'll be better than any adult who started as an adult.

Today's 15-year-olds live in an entirely different world. The opportunities to find interest in this subject are vast now. That wasn't the case 25 years ago.

I would put money on a 15-year-old establishing AGI before you do.

1

spreadlove5683 t1_iricoit wrote

I would say that I'd think we'd start pumping the brakes once we got close. The thing that worries me is arms race dynamics between countries making this possibly untrue.

2

intergalacticskyline OP t1_iricvfx wrote

It'll be China vs. the US imo, so there's no chance pumping the brakes will be an option.

1

sideways t1_irl58ig wrote

I doubt most governments have the imagination to grasp just how profoundly powerful AGI will be.

They're focused on guns, bombs, oil and money. It's the Maginot Line all over again.

1

AsheyDS t1_irir87x wrote

In my own personal opinion, many of the components for an AGI of average human-level intelligence will likely be tested and functional by the end of the decade, with something publicly demonstrable by the early-to-mid '30s. From there, it'll depend on how much testing is required, the method of distribution, current laws, and more as to when it will be publicly available.

I think that we have the concepts down, but development of the software and hardware (two separately developed things) will take more time (maybe until the end of the decade), followed by extensive testing, because it will be a very complex system. The processes may be simpler than one might assume, but a lot of data will still be involved, and obviously the dangers need to be mitigated. So even if the software and hardware capabilities converge and the architecture 'works', it will still need to be tested A LOT, not just for dangers, but even just to make sure it doesn't hit a snag and fall right apart. So even if we as a species technically have it developed in the next 10-15 years, it may take longer to get into people's hands. The good news is, I think it's virtually a 100% guarantee that it will happen, and sooner rather than later; it will be widely available, and I think multiple people/companies/organizations will develop it in different but viable ways. After that, it'll be up to people whether or not they believe any of it. No definition will satisfy everyone, so there will always be those who deny it even when it's here and they're using it.

2

ArthurTMurray t1_irgajde wrote

We have primitive AGI in the form of AI Minds.

1

fellow_utopian t1_irgxbfk wrote

There's still a long way to go before we have AGI. Today's systems aren't even close, and most of the research and funding is going towards fairly narrow domains like image generation. We'll be lucky to have it by 2050 imo.

1

AgginSwaggin t1_irhfadl wrote

5 years ago, AI systems weren't even the tiniest bit close to what we have now. Don't underestimate the power of exponential growth

5

fellow_utopian t1_irhxsmi wrote

Honestly, I found Watson from over a decade ago in 2011 to be more impressive than recent language models like GPT-3.

AGI is a very different beast from what most researchers and tech companies are working on, and progress in many relevant areas definitely hasn't been exponential.

2

umotex12 t1_irh0u3v wrote

A program that will consist of multiple AIs and work like a human? Fast.

Something that is actual AGI as one program? Not really. Honestly, humans are humans because of lots and lots of things. We don't even know exactly what depression drugs are doing to our brains. So how can we simulate this perfectly in a computer?

1

phriot t1_irh9g9o wrote

>So how can we simulate this perfectly in a computer?

There is the possibility that general intelligence doesn't have to come in the form of a brain evolved on Earth. That is, we could get to general intelligence without simulating a known brain. That's absolutely the most straightforward way to go about it, though, if AGI doesn't emerge otherwise before we get the knowledge and processing power for such a simulation.

8

_gr4m_ t1_irhyehy wrote

I totally agree. It's like we have machines that can lift almost unlimited weight, yet are much simpler to build than human muscle.

I think the brain's complexity is more a product of evolution and biological constraints than something intelligence really requires. In fact, it wouldn't surprise me if simpler is better.

3

IDUser13 t1_irhoxgd wrote

Cracking the hard problem might not be doable.

1

red75prime t1_irisvnz wrote

The hard problem of consciousness has nothing to do with AGI. The problem is about an externally unobservable phenomenon (consciousness). What we need from AGI (general problem solving ability) is perfectly observable.

2

Excellent-Hope-1514 t1_irj2bi8 wrote

2050 or later. You guys are not in the field (no offense) and don’t understand how far we are from AGI.

1

Snap_Zoom t1_irqin83 wrote

We will have AGI before we have consensus that it exists.

The AGI will announce itself, and various humans will acknowledge its presence, all the while many will claim it has not passed this or that test.

It will be highly controversial long after the AGI has accelerated its knowledge base well past any human, or even any group of humans.

And by then…

1

AgginSwaggin t1_isjq2v2 wrote

I can't believe it! They removed the post!

1

Future_Believer t1_irglxwd wrote

Why?

Seriously, you are asking for an opinion but not for qualifications or reasoning or educational level or educational focus or applicable hobbies. What will you do with the answers?

I consider the promulgation of usable information to be a reasonable thing and quite possibly a positive thing. OTOH, the sharing of opinions without regard to foundation is, IMNSHO, unlikely to produce any positives.

I do not think you should base your planning for the future on my opinions. You may wish to consider Michio or Ray or some of the others whose life focus has been the study of the probable future, but I fail to see the utility of a collection of random internet opinions.

However, I am willing to listen.

−7

intergalacticskyline OP t1_irgm4q3 wrote

I'm just trying to get a consensus of this community based on my own curiosity. I'm only using this info for entertainment purposes, nothing else. Just trying to see everyone's opinions.

8