Comments


Affectionate-Food912 t1_j3owp8d wrote

I think if you look at the average prediction of all experts, it will be somewhere in 2050. Their predictions are our best bet. 2023-2029 is just not gonna happen.

−4

tatleoat t1_j3p4ldo wrote

I think you'd get more interesting answers if you split 2023-2029 into two or three options

26

Gmroo t1_j3poqke wrote

Most certainly well before 2030. And I say this as someone who thinks AGI requires paradigm shifts. I already see signs that the sheer economic and intellectual force behind AI is driving the kind of work that is likely to produce those shifts. There is pressure and incentive to lower compute costs, apply the newest techniques across countless domains, innovate on hardware, explore embodiment, etc.

This year we'll already get what I call pseudo-AGI: LLM-based narrow AI that is general enough to be phenomenally useful when coupled with handy APIs and other modern AI techniques.

26

summertime_taco t1_j3pr39m wrote

This sub is completely delusional as to the realistic likelihood of the near-term onset of a singularity.

10

dlrace t1_j3ptf2g wrote

What is the geometric mean of those results?

2
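For anyone wanting to compute it: the geometric mean is just the exponential of the arithmetic mean of the logs. A minimal sketch in Python (the poll values here are made up for illustration, not actual results):

```python
import math

def geometric_mean(values):
    # Geometric mean = exp of the arithmetic mean of the logs.
    # Requires strictly positive inputs.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical poll answers expressed as "years until AGI".
years_until_agi = [7, 17, 27, 40]
print(geometric_mean(years_until_agi))
```

By the AM-GM inequality this always comes out at or below the ordinary average, so it down-weights the extreme long-tail guesses. Python 3.8+ also ships `statistics.geometric_mean` if you'd rather not roll your own.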

Cuidads t1_j3puy02 wrote

It depends on whether it's even possible with the current form of neural networks. If it's not, then it will probably take some time, since most research is on neural networks, while different algorithms or hybrids would be needed. There are two camps: those who think neural networks can generalize given a sufficient amount of data, and those who think they're just advanced polynomial curve-fitting algorithms that will never truly generalize (or at least not sufficiently) and are mostly doing naive interpolation. Here's an example of someone taking the "they don't generalize" position to its full extent: https://nikhilroxtomar.medium.com/why-deep-learning-is-not-artificial-general-intelligence-agi-e9667d147134

10
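The interpolation-versus-extrapolation distinction in that comment is easy to demonstrate with plain curve fitting. A minimal sketch (using a polynomial fit rather than a neural network, but it exhibits the same failure mode the linked article attributes to deep learning): fit sin(x) on [-π, π], then evaluate inside and outside that training range.

```python
import numpy as np

# Fit a degree-5 polynomial to sin(x) on the training interval [-pi, pi].
x_train = np.linspace(-np.pi, np.pi, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

# Inside the training range, the fit interpolates well...
x_in = np.linspace(-np.pi, np.pi, 50)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))

# ...but just outside it, the same model extrapolates badly.
err_out = abs(np.polyval(coeffs, 2 * np.pi) - np.sin(2 * np.pi))

print(err_in, err_out)  # interpolation error is tiny; extrapolation error is huge
```

Whether large neural networks escape this failure mode at scale is exactly the disagreement between the two camps.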

Gaudrix t1_j3q6b6k wrote

I think within 10 years we'll have something that at least makes us ask the question 🤔 "Is it conscious?" It might not be full AGI but proto-intelligence, about 80% there but not quite fully realized.

1

besnom t1_j3qfqm0 wrote

You should add a “check the results” option so people don’t click on a random date

4

No_Ninja3309_NoNoYes t1_j3qlbw5 wrote

I think that Deep Learning is a bubble that will burst in five years. The space of possible combinations of parameter values in neural networks blows up exponentially. Humans learn on the fly. We sort of make our own rules. We learn from experience. Neural networks can only associate words or the equivalent in images. They don't actually know what words or images mean. They are blind and deaf strangers in a strange land.

5

priscilla_halfbreed t1_j3qssj8 wrote

Home computers were barely emerging in 1990, and smartphones were just emerging 20 years later. If exponential growth compresses that 20-year span into 10, then I think we'll see a similar leap 10 years from 2020, so around 2030.

2

NarrowTea t1_j3rcjvz wrote

By 2030, but they won't call it AGI, since we won't get an AI controlling a robot body and making coffee for people like in the coffee test, and it will be so good it won't be able to pass the Turing test without dumbing itself down.

1

will-succ-4-guac t1_j3rn1c9 wrote

That seems like an exceptionally broad definition of “pseudo-AGI”, one that by most people’s definitions of “phenomenally useful” already exists and has for some time. ChatGPT is already phenomenally useful in daily life as it is.

2

will-succ-4-guac t1_j3rndhp wrote

> Neural networks can only associate words or the equivalent in images. They don't actually know what words or images mean.

Well, this raises the question of what it means to “actually know what words or images mean”.

I would posit that we don’t actually know that we are any more advanced than this.

You see an image of a computer and you know what the computer is and what it means. But that’s because your brain matches the image of the computer to computers you’ve already used and your experiences with those computers. What is different about neural networks?

3

Halperwire t1_j3rp6if wrote

It really depends on how people define AGI. Most of Kurzweil's predictions are somewhat true but make you say, "Well, that's not exactly what I had in mind, but technically you could argue he was correct."

1

Equivalent-Ice-7274 t1_j3rpeec wrote

Bro, I am an OG armchair futurist from back in the original KurzweilAI forum days. We saw tons of articles from physicists, scientists, engineers, programmers, philosophers, and other experts. I don’t have the time to research that stuff, as I am focusing on researching how to survive and live a prosperous life in the coming age of AGI.

2

will-succ-4-guac t1_j3rplii wrote

If you go that far back in the AI world, you are certainly aware of selection bias, recall bias, and response bias, all of which influence what you just said. So it’s not that the “majority” of experts said it would take hundreds of years. It’s the majority of experts people felt like talking about on a forum you were a part of, that you happen to remember, and that people happened to respond to.

1

joecunningham85 t1_j3rqtbf wrote

r/singularity is delusional??? I thought this was the cutting edge of scientific knowledge and rational thinking populated by all of the world's leading AI researchers. It's the rest of the world that is delusional. Most people still suffer under the delusion that having kids, making investments, learning, falling in love, and enjoying their life has any meaning or value.

Much better idea to just get by day to day until the benevolent AGI overlord comes and solves every one of our problems, gives us UBI, uploads you into the mainframe, and lets you fuck as many androids as you want and travel across the galaxy for all eternity.

/s

6

NefariousNaz t1_j3s0r78 wrote

At this point it looks like we're getting really close. It can easily happen in the 2030s. Maybe sooner, but it wouldn't be labeled as AGI by the mainstream.

1

Future_Believer t1_j3s0sgh wrote

I am a people and I don't love these polls. However, this one did generate a query within me that I am willing to externalize and listen to the thoughts of others.

I am not currently a programmer nor have I done any actual programming in a couple or three decades. I am not in an industry that stands to benefit or suffer from AGI's realization. I do pay attention to tech and science trends but only as a casual observer.

So why should anyone care what timeframe I believe is the right one for the onset of actual AGI? Is there an action that will be taken should we reach consensus? What will change if the poll goes all one way or the other?

2

EOE97 t1_j3t4phg wrote

I'm of the opinion that AGI arrives in the late 2020s to 2030s.

I don't think we will have to discover or invent something radically different to get to AGI. It's most likely going to come from things like:

  • massive scaling (tens of trillions of parameters, 1,000x more data than today's top models)
  • making them more multimodal (able to work with and understand audio, video, images, and text, interact with software, control hardware, etc.)
  • improving neural network technology (better step-by-step reasoning, faster learning and shorter training times, improved memory, etc.)
  • optimising hardware (ASICs for AI, neuromorphic computing, etc.)

Basically, exponentially scaling AI will lead to AGI within a decade. Scaling is the "missing piece".

1