Submitted by blueSGL t3_10gg9d8 in singularity

Way back in the mists of time (just over a month ago)...

We were informed that the Metaculus community prediction for "Date Weakly General AI is Publicly Known" had dropped to a record low of Aug 26, 2027.

At the time my top voted comment was dismissive.

> "If the sign up on metaculus is driven by 'AI optimistic' podcasts/sites advertising its existence it will naturally trend lower due to a self selecting cohort of new users signing up being more optimistic about AGI."

However I was WRONG.

Twitter user @tenthkrige, who works at Metaculus, had already run the numbers.

https://twitter.com/tenthkrige/status/1527321256835821570

TL;DR: users who had previously predicted the AGI timeline and subsequently updated did so in lockstep with new users.

In summary: it is not that overly optimistic people are joining and goosing the numbers; everyone is trending in that direction.
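
For anyone who wants to replicate the kind of check @tenthkrige ran, here is a minimal sketch of the cohort analysis, with synthetic data and hypothetical column names (this is not Metaculus's actual schema): split forecasters into existing and newly joined accounts and see whether both cohorts' median predicted AGI date moves together.

```python
import pandas as pd

# Synthetic example data; column names are hypothetical, not Metaculus's schema.
preds = pd.DataFrame({
    "joined":        pd.to_datetime(["2020-05", "2020-05", "2020-06",
                                     "2020-06", "2022-03", "2022-04"]),
    "predicted_at":  pd.to_datetime(["2021-06", "2022-05", "2021-06",
                                     "2022-05", "2022-05", "2022-05"]),
    "predicted_agi": pd.to_datetime(["2040-01", "2031-01", "2038-01",
                                     "2030-01", "2029-06", "2031-06"]),
})

# Split into cohorts: accounts that existed before the influx vs. new sign-ups.
cutoff = pd.Timestamp("2022-01-01")
preds["cohort"] = preds["joined"].lt(cutoff).map({True: "existing", False: "new"})
preds["agi_year"] = preds["predicted_agi"].dt.year

# If existing users' updates track the new users' numbers,
# self-selection can't be the whole story.
print(preds.groupby(["cohort", "predicted_at"])["agi_year"].median())
```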

63

Comments


zero_for_effort t1_j52ko8q wrote

Very interesting. I keep waiting for the prediction timer to stall, but it just keeps jumping closer to the present. I can't wait to see which direction it moves when GPT-4 is released.

17

Thatingles t1_j52nhox wrote

Here's the thing: with the predicted date being so close, you could argue it is now being more heavily influenced by the pessimists. If the prediction date is 2027, that only gives the optimists five years to play with, but the pessimists can go as far out the other side as they want.

In a sense it doesn't matter, because it will happen when it happens; the prediction date is like reading tea leaves, a thing you look at to distract yourself whilst you come up with a forecast.

It is worth remembering that technology moves at the rate of the fastest. Everyone else has to catch up to the new point and restart from there. What I'm trying to say is that predicted dates reflect what each individual knows, but actual dates will reflect only what the fastest groups achieve. If Bob predicts 2035 based on his knowledge but doesn't know that Sue has already achieved (though not published) several of the steps on his timeline, Bob's prediction is worthless. We obviously don't know ahead of time who falls into which category; all we can say for sure is that the pessimists are more likely to be caught out.

29

DungeonsAndDradis t1_j52px80 wrote

Well...<pushes glasses up into firing position>

  1. Kurzweil's main shtick is the Law of Accelerating Returns. Basically, technological advances arrive more and more quickly. For example, it took humanity roughly 200,000 years to develop the steam engine, and then only 200 more to go to the moon. (A toy version of this is sketched just after this list.)

  2. 2022 was a ballistic year for AI advances from nearly every company researching it: PaLM, LaMDA, Gato, DALL-E 2, ChatGPT. These tools are revolutionary advances in AI.

  3. Following the Law of Accelerating Returns, 2023 should bring major leaps in AI, then again in 2024, and by 2025 things should be bonkers.
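
A toy formalization of point 1 (my framing, using only the comment's own numbers, not Kurzweil's actual model): suppose each successive milestone takes a fixed fraction r < 1 of the time the previous one took.

```latex
\[
  t_{n+1} - t_n = r\,(t_n - t_{n-1}), \qquad 0 < r < 1
\]
% With the comment's example: 200,000 years to the steam engine,
% then 200 years to the Moon, so r = 200 / 200,000 = 10^{-3}.
% If the pattern held, every later milestone would land within
\[
  \sum_{n=0}^{\infty} 200 \cdot r^{\,n} \;=\; \frac{200}{1 - 10^{-3}} \;\approx\; 200.2 \text{ years}
\]
% of the steam engine: the gaps collapse, which is the whole point.
```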

My layman's guesstimate is that the next major architectural design is going to happen this year, much like transformers accelerated AI research in 2017. One or two more major architecture pivots lead us to AGI.

It's only going to get weird from here!

24

AsuhoChinami t1_j52x8ly wrote

Weird, some super aggressive, inflammatory guy outright called me a delusional idiot for not believing AGI will take until 2050-2065 to arrive (which is, in his words, the consensus amongst almost all AI experts).

28

icedrift t1_j534dwy wrote

Does Metaculus only poll people in the field and verify credentials, or can anybody submit an estimate? If it's the latter, why put any stock in it? AI seems like one of those things that attracts a lot of fanatics who don't know what they're talking about.

Polls of industry veterans tend to hover around a 40% chance of AGI by 2035.

4

SoylentRox t1_j534guy wrote

If you wanted to dismiss Metaculus, you would argue that since it's not a real-money betting market operating over a long period of time, it's not going to work that well. Real money means that people are only going to vote when they are confident, and the long timespan means that losers lose money and winners gain money; over time this gives the winners larger "votes" because they can bet more money.

Over an infinite timespan, the winners become the only bettors.
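
A minimal simulation of that mechanism (my toy model, not anything the comment specified; the probabilities and stake rule are made up for illustration): two bettors repeatedly stake a fixed fraction of their bankroll on a binary event, and the wealth-weighted consensus drifts toward the better-calibrated bettor.

```python
import random

random.seed(0)
P_TRUE = 0.7  # true frequency of the event (illustrative)

# Two bettors with fixed beliefs; the "sharp" one happens to be right.
bettors = {"sharp": {"belief": 0.7, "wealth": 1.0},
           "dull":  {"belief": 0.3, "wealth": 1.0}}
STAKE = 0.1  # fraction of current wealth risked each round

for _ in range(500):
    outcome = random.random() < P_TRUE
    for b in bettors.values():
        bet_yes = b["belief"] > 0.5   # bet on the side you think likelier
        stake = STAKE * b["wealth"]
        b["wealth"] += stake if bet_yes == outcome else -stake

# The market's forecast, weighted by bankroll: winners dominate over time.
total = sum(b["wealth"] for b in bettors.values())
consensus = sum(b["belief"] * b["wealth"] for b in bettors.values()) / total
print(f"wealth-weighted consensus: {consensus:.3f}")  # approaches 0.7
```

Run long enough, the losing bettor's bankroll, and with it his "vote", goes to zero, which is exactly the limit described above.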

As for AGI in 2027: sure. It's like predicting the first crossing of the Atlantic by plane once planes are already flying around over shorter distances. It's obviously possible.

6

icedrift t1_j535agx wrote

He's not wrong... In a 2017 survey distributed among AI veterans, only 50% thought a true AGI would arrive before 2050: https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

I'd be interested in a more recent poll but this was the most up to date that I could find.

EDIT: Found this from last year https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022

Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really knows, that's for certain.

10

AsuhoChinami t1_j5360dz wrote

And the half that agrees with you counts more than the half that doesn't because reasons? I'm a delusional idiot for sharing the same opinion as a tiny, minuscule, insignificant, irrelevant, vanishingly small, barely even existent 50 percent demographic?

8

icedrift t1_j537xjq wrote

I'm inclined to trust the people actually building AI. 50% of experts agreeing AGI is likely in the next 30 years is still pretty insane. Personally I think a lot of the AI-by-2030 folks are delusional.

5

blueSGL OP t1_j53btzc wrote

You might find this section of an interview with Ajeya Cotra (of biological-anchors-for-forecasting-AI-timelines fame) interesting.

Starts at 29:14: https://youtu.be/pJSFuFRc4eU?t=1754

She talks about how several benchmarks were passed early last year that surveys of ML workers had pegged at a median date of 2026. She also casts doubt on people who work in the field but are not specifically forecasting AGI/TAI as a source of useful information.

16

Borrowedshorts t1_j53ksqo wrote

There are two types of AI experts: those who focus their efforts on a very narrow subdomain, and those who study the problem through a broader lens. The latter group, the AGI experts who have actually studied the problem as a whole, tend to be very optimistic on timelines. I'd trust the opinion of those who have actually studied the problem over those who haven't. There are numerous examples of experts in narrow subdomains being wrong, or just completely overshadowed by changes they could not see.

12

Borrowedshorts t1_j53lrlq wrote

The world has never seen anything like AI progress. AI capability has been advancing at nearly an order of magnitude of improvement each year, which is completely unprecedented in human history. I think it's much more absurd to have such confidence that AI progress will cease for no particular reason, which is what would have to happen if the post-2050 predictions are correct.
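
Taking the comment's figure at face value (the 10x/year rate is the commenter's claim, not a measured constant), the arithmetic behind "progress would have to cease" is easy to make concrete:

```latex
\[
  x(n) = x_0 \cdot 10^{\,n}
  \quad\Longrightarrow\quad
  x(5) = 10^{5}\,x_0, \qquad
  x(27) = 10^{27}\,x_0
\]
% 27 years is roughly the distance from 2023 to 2050: a post-2050
% timeline implies the exponent collapses, not merely that the
% remaining capability gap is large.
```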

9

Nervous-Newt848 t1_j53rk87 wrote

OpenAI isn't the only company working on AGI... There are other companies and governments working on this, especially China.

5

blueSGL OP t1_j53yvki wrote

> but it just keeps jumping closer to the present.

Connor Leahy described this as a "pro gamer move"

> "If you see a probability distribution only ever update in one direction, just do the whole update instead of waiting for the predictable evidence to come, just update all the way bro."

Kinda looks like Kurzweil's "Law of Accelerating Returns".
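
Leahy's quip is essentially conservation of expected evidence: for a coherent Bayesian, today's belief already equals the expectation of tomorrow's, so a forecast that predictably drifts one way is leaving an update on the table. A quick numerical check of that identity with a Beta-Bernoulli model (my illustration, not Leahy's):

```python
import random

random.seed(0)
alpha, beta = 2.0, 2.0                    # Beta(2, 2) prior over a coin's bias
prior_mean = alpha / (alpha + beta)       # 0.5

# Average the posterior mean over many draws from the prior predictive.
posterior_means = []
for _ in range(100_000):
    heads = random.random() < prior_mean  # P(heads) under the prior predictive
    a, b = (alpha + 1, beta) if heads else (alpha, beta + 1)
    posterior_means.append(a / (a + b))

avg = sum(posterior_means) / len(posterior_means)
print(f"prior mean {prior_mean:.3f}, expected posterior mean {avg:.3f}")
# The two match: beliefs should not move predictably in one direction.
```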

3

SurroundSwimming3494 t1_j54661j wrote

My guess is that most AI researchers are pretty familiar with AI beyond narrow cases, so I think most of them are qualified to give an answer to "will AGI ever arrive, and if so, when?".

Also, I get the sense that a lot of the AGI crowd knowingly engage in hype to get more publicity, and it makes sense: "AGI soon" is a lot sexier a topic to touch on in a podcast (for example) than "AGI far away".

0

No_Ninja3309_NoNoYes t1_j5493t8 wrote

"Weakly general" sounds strange to me; it sounds like almost-human. I think we need some sort of minimal requirements, otherwise we might be talking about different things.

I think AGI has to at minimum:

  • be multimodal
  • be embodied
  • know how to learn
  • be able to follow a chain of arguments
  • be able to communicate autonomously
  • understand ethical principles

And there are many other things, but these seem hard enough. I think the first two are doable by 2027. Not so sure about the others.

I know how people love to talk about exponential growth, but let's not forget that something has to drive it. Deep learning has been driven by GPUs and the abundance of data, and neither is an inexhaustible resource.
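
One standard way to formalize "something has to drive it" (my addition, not the commenter's math): resource-limited growth follows a logistic curve, which looks exactly exponential early on and only later reveals the ceiling L set by the limiting resource (chips, data):

```latex
\[
  x(t) = \frac{L}{1 + e^{-k(t - t_0)}},
  \qquad
  x(t) \approx L\,e^{\,k(t - t_0)} \;\;\text{for } t \ll t_0,
  \qquad
  \lim_{t \to \infty} x(t) = L
\]
```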

3

Sea-Cake7470 t1_j54bowa wrote

Hear me out... it's probably this year or the next at most... no further than that...

0

Ortus14 t1_j54byn2 wrote

It has always been the case that people working within a field overestimate how long it will take to achieve things within that field. They are hyper-focused on their tiny part and miss the big picture.

To make accurate predictions you need to use data, trendlines, and growth curves. It literally doesn't matter how many "experts" are surveyed; the facts remain the facts.

A few people making data- and trendline-based predictions carry far more weight than an infinite number of "experts" who base their predictions on anything other than trendlines.
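
For what it's worth, the minimal version of that method looks like this: fit an exponential trend by linear regression in log space and extrapolate. The scores and the threshold below are made-up illustrative numbers, not a real benchmark series.

```python
import numpy as np

# Made-up capability scores, roughly 3x per year (illustrative only).
years  = np.array([2018, 2019, 2020, 2021, 2022])
scores = np.array([1.0, 3.1, 9.8, 30.5, 97.0])

# Fit log(score) = slope * year + intercept, i.e. an exponential trend.
slope, intercept = np.polyfit(years, np.log(scores), 1)
print(f"fitted growth: {np.exp(slope):.2f}x per year")

# Extrapolate to a hypothetical "AGI-level" threshold.
THRESHOLD = 1e5
year_hit = (np.log(THRESHOLD) - intercept) / slope
print(f"trend crosses threshold around {year_hit:.1f}")
```

The catch, of course, is the earlier commenters' point: nothing in the fit tells you whether the trend is a true exponential or just the early half of a logistic.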

5

No_Airline_1790 t1_j54haff wrote

2030 was the date given to me by a source in early 2022, but then in mid-2022 (July 5th) I was told by the source that something happened unexpectedly that jumped the timeline to 2027. So I am led to agree.

4

dasnihil t1_j55450p wrote

I just don't get the idea of counting days; are you guys depressed or something? What do you think will happen the day, let's say, Nvidia announces that they have achieved a neural network running on neuromorphic hardware in a very optimal way?

Big announcement, but we'll all forget about it in a couple of days :)

After that it's a game of implementation and industrialization: how can we make our industries more powerful and take this human enterprise to the next level? I doubt that the leaders and capitalists would have any desire for a utopian society with shared resources and harmony. That kind of ask will take at least 100 years to be implemented in our society. This is a big change.

I personally don't expect to see many significant changes in my lifetime, like getting a $500/mo check from some AI Labor Law Allowance. Maybe in the coming generations, if we play our cards right and don't wipe out all life and any hope for artificial life/intelligence.

1

Borrowedshorts t1_j55qvtl wrote

I don't think they are, honestly. They may know some of the intricacies and difficulties of their specific problem and then project that it will be that difficult to make progress in other subdomains. That's probably true, but they also tend to underestimate the efforts other groups are putting in and the progress that can happen in other subdomains, which isn't always linear. So imo, they aren't really qualified to give an accurate prediction, because very few have actually studied the problem as a whole. I'd trust the people who have actually studied it; these AGI experts tend to be much more optimistic than the AI field overall.

3

AsheyDS t1_j55s0h2 wrote

>AGI experts

No such thing yet, since AGI doesn't exist. Even when it does, there are still going to be many more paths to AGI in my opinion, so it may be quite a while before anyone can be considered an expert in it. Even the term is new and lacks a solid definition.

1

AsheyDS t1_j55tl03 wrote

>My layman's guesstimate is that the next major architectural design is going to happen this year.

You may be right, but a design is speculative until it can be built and tested, and that will take some time.

2

DungeonsAndDradis t1_j56aqfa wrote

I believe the architectural changes have already been made, perhaps last year, and they are currently being tested. I believe we'll see the finished paper(s) announcing one or more breakthroughs this year.

2

RabidHexley t1_j56s60f wrote

> That kind of ask will take at least 100 years to be implemented in our society. This is a big change.

I personally have come around to the thought that something like UBI being implemented due to automation won't come from compassionate, socialist ideals, but simply because it will become necessary for capitalism to continue functioning.

Reaching a point where you can produce arbitrary amounts of goods without needing to pay nearly anyone across numerous economic sectors is a recipe for rapid deflation. UBI would become one of the only practical methods of keeping the wheels turning and the money flowing.

Maybe after years of it being the norm it would lead to a cultural shift towards some sort of properly egalitarian society, but it would start because hyper-efficiency resulting in economic collapse isn't good for anyone, including the wealthy.

2