Submitted by Phoenix5869 t3_xu9ra8 in singularity

According to aicountdown we're going to have AGI in June 2028…

I know there's been a lot of progress in AI, and I'm not saying it won't happen or anything, but AGI in less than 6 years seems a little far-fetched.

1

Comments


Sashinii t1_iqujv7r wrote

No one - not me, Ray Kurzweil, Sam Altman, Demis Hassabis, etc. - can say with certainty when AGI will be developed, but 2028 is a realistic prediction.

32

manOnPavementWaving t1_iqutco3 wrote

Realistic being a relative term here. I'm more on Sam Altman's side; he said not too long ago that it'll be about 7 years before AI can give a good TED talk completely on its own.

6

albions_buht-mnch t1_iqujldv wrote

Probably over-optimistic.

Optimism is based though.

9

Lawjarp2 t1_iquq198 wrote

Oh yeah, there's a premium version with accuracy up to the nanosecond.

7

zero_for_effort t1_iqujp94 wrote

Who would you trust to reliably confirm or deny that it's accurate? You can read about how they adjust their countdown on their website (follow the link under the countdown timer). My understanding is that averaging the predictions of a group tends to produce more reliable forecasts than individual guesses, though I'm not sure whether this has been demonstrated for futurology in general or future-tech predictions in particular.
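For what it's worth, the "averaging predictions" idea is easy to illustrate. Here's a minimal Python sketch of crowd aggregation; the individual forecast dates are made up for illustration, not pulled from Metaculus:

```python
from datetime import date
from statistics import median

# Hypothetical individual forecasts for an "AGI arrives" date --
# illustrative values only, not real Metaculus data.
forecasts = [
    date(2026, 3, 1),
    date(2027, 11, 20),
    date(2028, 6, 1),
    date(2031, 9, 15),
    date(2040, 1, 1),
]

# Convert dates to ordinals so they can be aggregated numerically.
ordinals = [d.toordinal() for d in forecasts]

# The median is robust to extreme outliers ("next year!", "never!"),
# which is part of why crowd forecasts often beat individual guesses.
crowd_estimate = date.fromordinal(round(median(ordinals)))
print(crowd_estimate)  # -> 2028-06-01 for this toy data
```

A countdown site can then just render the time remaining until the aggregated date and re-run the aggregation whenever the community forecasts update.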

5

arindale t1_iqwduom wrote

You have to look at the source data. Specifically, the countdown links to Metaculus's "Date Weakly General AI is publicly known" question, which defines the criteria further (quoted below). I've added some definitions of my own in [square brackets].

"For these purposes we will thus define "AI system" as a single unified software system that can satisfy the following criteria, all easily completable by a typical college-educated human.

Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize. [The "silver" prize is offered for the first chatterbot that judges cannot distinguish from a real human and which can convince judges that the human is the computer program.]

Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the "Winogrande" challenge or comparable data set for which human performance is at 90+% [a commonsense-reasoning benchmark: the original Winograd Schema Challenge is a set of 273 expert-crafted pronoun resolution problems designed to be unsolvable for statistical models. Recent neural language models have already reached around 90% accuracy on variants of it. Per Cornell University, this was solved by 2019, but note that those models were ANI.]
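[For concreteness, here's a rough Python sketch of what scoring a Winograd-style benchmark involves: each item is a sentence with a blank and two candidate referents, and a system is graded on plain accuracy. The examples are the classic trophy/suitcase pair written in the Winogrande fill-in-the-blank style, not items from the actual dataset:]

```python
# Winograd/Winogrande-style items: a sentence with a blank ("_") and two
# candidate referents. These are the classic trophy/suitcase examples
# rewritten in fill-in-the-blank form, not items from the real dataset.
items = [
    {"sentence": "The trophy didn't fit in the suitcase because the _ was too big.",
     "options": ("trophy", "suitcase"), "answer": "trophy"},
    {"sentence": "The trophy didn't fit in the suitcase because the _ was too small.",
     "options": ("trophy", "suitcase"), "answer": "suitcase"},
]

def accuracy(predict, items):
    # Plain accuracy: fraction of items where the system picks the right option.
    return sum(predict(i["sentence"], i["options"]) == i["answer"] for i in items) / len(items)

# A baseline that always picks the first option sits near 50% on a balanced
# set; the Metaculus criterion asks a single system for 90%+.
print(accuracy(lambda s, opts: opts[0], items))  # -> 0.5 on this toy pair
```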

Be able to score 75th percentile (as compared to the corresponding year's human students; this was a score of 600 in 2016) on the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages and having less than ten SAT exams as part of the training data. (Training on other corpuses of math problems is fair game as long as they are arguably distinct from SAT exams.) [I believe this was solved as early as 2015, though that system may have cheated by training on previous SAT tests. More recent work suggests that an AI can solve university-level math problems, which would be harder; the link provided is one of many news articles on this. I consider this problem likely solved.]

Be able to learn the classic Atari game "Montezuma's revenge" (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see closely-related question.)" [Montezuma's Revenge was solved in 2018 by Uber's Go-Explore agent. I'm unsure whether it met the 100-hours-of-real-time-play threshold, but other models have been released since Uber's paper and one may already have met it.]
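As a sanity check on the "100 hours" bound, it's easy to convert real-time play into the units RL papers usually report. A back-of-the-envelope sketch, assuming the standard 60 fps Atari emulator rate and the common frameskip of 4 (both assumptions of mine, not part of the Metaculus wording):

```python
# Back-of-the-envelope: what "100 hours of real-time play" means in the
# units Atari RL papers usually report. The 60 fps emulator rate and the
# frameskip of 4 are standard-setup assumptions, not Metaculus wording.
hours = 100
fps = 60

frames = hours * 3600 * fps   # 21,600,000 raw emulator frames
agent_steps = frames // 4     # ~5,400,000 decisions at frameskip 4

print(f"{frames:,} frames, roughly {agent_steps:,} agent steps")
```

For comparison, headline Atari results are often reported at around 200 million frames, so 100 hours is a genuinely tight sample budget for a hard-exploration game like Montezuma's Revenge.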


Now, personally, I think this specific set of criteria is insufficiently broad for a weak AGI. But I admit that everyone has a different definition of weak AGI, and Metaculus at least provides a precise definition that can be measured against. Given this definition, I think it's somewhat possible for a single model to meet all of the criteria in 2022 or 2023. Two notable challenges remain:

  1. To make a truly remarkable chatbot that is indistinguishable from a human. There are some serious contenders, but I would argue that this is not quite ready yet.

  2. To create a SINGLE AI model that can do ALL of these tasks.

Will we see a single AI model in 2023 that fits all of these criteria? I have high hopes for the Gato 2 scale-up, but who knows at this point.

4

sumane12 t1_iqx79hi wrote

The fact that we are now debating the definition of AGI should tell you all you need to know. AI has advanced to the point that, if you'd shown its current capabilities to people a few years ago, they would have been convinced AGI had been achieved.

I think AI and human intelligence (HI) are different. AI has not had to endure 4 billion years of natural selection in a predator/prey environment, so its goals (in my humble opinion) will never be comparable to our goals; it might not even be able to have goals that are not dictated to it by us (much like our goals are dictated by natural selection). While those differences remain, people will not be convinced AGI has been achieved (even if all of its capabilities surpass HI).

Personally, I think my version of AGI will be achieved by 2028: a chatbot that can hold an engaging human-level conversation, carry out basic requests, and fully function as a worker in 90% of jobs. But hey, that's just my opinion 🙂

4

Deformero t1_iquk5h5 wrote

How can it be accurate? Based on what? Nonsense.

3

NTaya t1_iqvdwgb wrote

Considering the countdown links to the weak AGI Metaculus question, I'd say it's around five years too late, lol. What's described in the question is achievable in under a year, but it's not AGI.

2