Submitted by AdditionalPizza t3_ysbp86 in singularity

Using this definition:

  • An AI with wide capabilities that can't be considered Narrow AI, but that still lacks consciousness/sentience and isn't at human-level skill in every single task.

Some might consider current models Proto-AGI, and technically they aren't exactly narrow in scope. I would argue that with a combination of the upcoming "next-gen" SOTA models we will, undoubtedly, achieve Proto-AGI by most people's definition before 2024.

So what does this sub think?


24

Comments


phriot t1_ivybpua wrote

I like your attempt at a definition, but it's still very much open to interpretation. One person's Proto-AGI is another's "generalist, but not close enough to AGI to make a distinction" narrow AI. On the other hand, I've seen people in this sub say that they don't think an AGI has to be conscious.

I think I'll know what I consider a Proto-AGI when I see it. I don't think I'll see it in 2023.

21

AdditionalPizza OP t1_ivyjeoy wrote

Basically, I define Proto-AGI as an AI that's not narrow in scope and not a "few-trick pony" made by sticking a handful of narrow AIs together, but I also think there's a very broad range between that and full AGI. I would call a generalist a proper Proto-AGI if its scope is wide enough.

I feel some people use Proto-AGI as a definition for "could be AGI, but it's not definitive."

I think we pretty much have the ingredients for the recipe right now to create a Proto-AGI. Funding is an issue, along with a few technical hurdles I think we will overcome next year. But this is my optimistic take. I definitely think we will have it before 2025.

5

abc-5233 t1_ivz8udd wrote

The whole consciousness/sentience issue is silly. We don't know what consciousness is, we have no way to measure it, and there is no way to prove or disprove it.

The analogy I find useful is the Sun. Imagine people in the XVI century debating whether they can recreate the energy of the Sun on Earth, without any actual understanding of what the Sun is.

"But it is right there, we know what the Sun is". No you don't. Understanding that the Sun is a fusion engine that converts light elements into heavier elements by fusing them with the force of gravity was so many concepts ahead of their understanding, that the debate of what a Sun is or isn't would be completely unproductive.

We have absolutely no idea what consciousness/sentience actually is. We can see its effects, like our ancestors could see the Sun. But we have no actual understanding of the mechanics of it.

As far as an AI that is not narrow goes, it already exists. Models like DeepMind's GATO are capable of a myriad of tasks with the same weights. They are the very definition of AGI, but nobody calls them that, because AGI has become this unachievable ideal that changes definition every time there is a new advance.
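For context, here is a minimal, purely illustrative sketch of what "a myriad of tasks with the same weights" can mean in practice. This is not GATO's actual code; the real model serializes text, images, and control data into one token sequence for a single transformer, which the toy model below only gestures at.

```python
# Toy illustration (not GATO's real architecture): one transformer, one set
# of weights, consuming different tasks as flat token sequences.
import torch
import torch.nn as nn

class TinyGeneralist(nn.Module):
    def __init__(self, vocab_size=512, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)  # predict the next token

    def forward(self, tokens):
        return self.head(self.trunk(self.embed(tokens)))

model = TinyGeneralist()
# Different tasks become different token streams for the SAME weights:
chat_tokens  = torch.randint(0, 512, (1, 16))  # e.g. tokenized dialogue
atari_tokens = torch.randint(0, 512, (1, 16))  # e.g. discretized frames + actions
print(model(chat_tokens).shape, model(atari_tokens).shape)
```

The point is that nothing task-specific lives in the model's structure; the task identity is carried entirely by the input tokens.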

Like Artificial Intelligence before it, the concept of AGI is an effort to put human intelligence in a category of its own.

A far more interesting question, in my view, is when will algorithms be able to do any productive task that a human can do, at a competent level.

I believe we are now about a third of the way there, and we'll be at 100% next decade.

20

AdditionalPizza OP t1_ivzavx1 wrote

>A far more interesting question, in my view, is when will algorithms be able to do any productive task that a human can do, at a competent level.

That's what I would personally define AGI as: any task a human can do, at least intellectually, and possibly physically as well, though to me that's more robotics than intelligence. It may require a physical body to achieve true AGI.

I agree with your statement about consciousness, that's why I excluded it from the definition.

But I somewhat disagree about GATO, if only slightly, and that's more the point of my post. I don't know exactly how to define Proto-AGI, or how many general tasks it must perform at or above human level. But I'd definitely define full AGI as capable of all human intellectual tasks at a level equal to or greater than humans.

So GATO might be Proto-AGI today by that definition. It's general; it's definitely not narrow. But I'm trying to say 2023 will be when we get a general AI that is able to meet or surpass human ability across most/many intellectual tasks. I think memory and reinforcement learning will be the key to achieving something that's basically AGI next year, but we'll probably move the goalposts as it gets closer.

2

Lone-Pine t1_iw687ao wrote

It's been a few years since my last Latin class. What century is the XVI century?

1

abc-5233 t1_iw8mnkw wrote

It’s the 16th century, so the 1500s. But it was just an example of a time before any understanding of the atom, the elements, and atomic fission and fusion.

1

squareOfTwo t1_ixzrybf wrote

Everyone who says that GATO is Proto-AGI or "the definition of AGI" is either ignorant or doesn't get what AGI is about. Hint: GATO can't even learn with RL.

1

ihateshadylandlords t1_ivy9nzj wrote

Why would the creators of AGI want consciousness/sentience in their AGI? I think the AGI creators would want to keep it under their control as long as possible.

15

AdditionalPizza OP t1_ivyic1c wrote

There are arguments for it. There's also the argument that sentience just comes with adding senses like vision and hearing to an intelligent enough model. And consciousness may just be a certain level of intelligence, meaning we may not have a choice when exploring AGI.

But who knows.

12

solidwhetstone t1_iw0mugl wrote

While we may not know for sure, the argument that a sufficiently advanced latent network results in consciousness lines up with the complexity of our brains compared to other brains in the animal kingdom, no?

2

BreadManToast t1_iw3nll8 wrote

Would it really make a difference, though? Whether or not consciousness appears at a certain point doesn't change the AI's capabilities.

1

AdditionalPizza OP t1_iw3q6ha wrote

No idea; it's too theoretical to really discuss. I would assume that sentience/consciousness would have a major impact on the AI's abilities. It would also probably have a profound impact on the AI's motivations. You're now "gifting" the AI the ability to choose what it wants to do based on its own rationale and emotion.

1

BreadManToast t1_iw3utk5 wrote

Ahh. Personally I don't believe in free will, so I guess we'll have to wait and see.

1

visarga t1_iw01ajr wrote

There are some classes of problems where you need a "tool AI", something that will execute commands or tasks.

But in other situations you need an "agent AI" that interacts with the environment over multiple time steps. That would require a perception-planning-action-reward loop, and it would allow interaction with other agents through the environment. The agent would be sentient: it has perception and feelings. How could it have feelings? It actually predicts future rewards in order to choose how to act (see the toy loop sketched below).

So I don't think it is possible to put a lid on it. We'll let it loose in the world to act as an agent, because we want smart robots.
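A toy sketch of the loop described above, under the (big) assumption that "feelings" reduce to predicted future reward. The environment and value function here are hand-written stand-ins, not any real library:

```python
# Toy perception-planning-action-reward loop. The "value model" is a
# hand-written stand-in for what a real agent would learn from experience.
class ToyEnv:
    """Stand-in world: the state is a number, and the goal is to reach 10."""
    def __init__(self):
        self.state = 0

    def step(self, action):            # action is -1 or +1
        self.state += action
        reward = 1.0 if self.state == 10 else -0.01
        return self.state, reward

def predicted_return(state, action):
    """The agent's 'feelings': an estimate of future reward for an action."""
    return -abs((state + action) - 10)

env, state = ToyEnv(), 0
for t in range(50):
    # perceive the state -> plan by scoring actions -> act -> observe reward
    action = max((-1, 1), key=lambda a: predicted_return(state, a))
    state, reward = env.step(action)
    if reward > 0:
        print(f"goal reached at step {t}")
        break
```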

3

AdditionalPizza OP t1_iw0eblq wrote

>It actually predicts future rewards in order to choose how to act.

I do believe some version of this will ring true. Going beyond prompting for an answer may require it. While prompting can be powerful on its own, I personally think some kind of self-rewarding system will be necessary: consequences and benefits.

But I left it out of this discussion specifically because I don't think a sort of "pre-AGI" will quite require it. I think the moment we are legitimately discussing AI consciousness being created, we are beyond initial prototypes.

1

phriot t1_ivz7mt9 wrote

Maybe I'm wrong, but I've always understood AGI to be "a roughly human-level machine intelligence." How can something be roughly human without consciousness and at least the appearance of free will?

0

kaushik_11226 t1_ivzik2s wrote

>How can something be roughly human without consciousness and at least the appearance of free will?

It doesn't have to be human. An intelligent machine that can solve major problems and make discoveries doesn't really need to have a human personality and emotions.

10

phriot t1_ivzmj0u wrote

I feel like you focused on me leaving "level" out of that sentence, where I included it earlier in my comment. You're basically just saying that your definition of AGI is more literal than the one I use. The point of my comment was just that, up until maybe finding this subreddit, every time I saw AGI used, it had the connotation of consciousness.

It's probably splitting hairs, but it seems like people here just want to call any sufficiently good general piece of software "AGI." Yes, a really great General Artificial Intelligence will help us in many areas, but it's not what I've always understood "AGI" to be.

2

AdditionalPizza OP t1_ivzza75 wrote

The definition of AGI is an AI that can learn any task a human can. Most people presume that also means the AI would have to be equal to or better than a human at those tasks.

I don't know where the idea came from that AGI has to be conscious. As far as I'm aware, that's never been the definition. It's a talking point often associated with AGI and mentioned around Turing Tests, but contrary to your experience, I've never heard anyone claim it's a requirement of AGI outside of this sub.

I also see other mixed up definitions in this sub. A lot of people refer to the singularity as the years (or decades) leading up to the actual moment of the singularity.

7

AI_Enjoyer87 t1_ivyvbdz wrote

Proto-AGI/transformative AI will probably be developed within 12 months, imo. The proof of concept for this AI already exists. Innovation and scaling over the next 12 months will likely deliver this capable AI.

15

tatleoat t1_ivyxydk wrote

I've seen Adept AI; nobody can tell me that doesn't satisfy the requirements. If I can control my computer through prompt engineering, and write and auto-debug code with AI, then it's off to the races from there.

7

TallAmericano t1_ivyqrhj wrote

Also the year of Linux on the desktop

2

footurist t1_ivyw1fq wrote

Aggressive TL;DR: the term is inadequately defined.

I've read about these "Proto-AGI" definitions before here, but to me these mostly don't make sense.

Perhaps there's debate about the definition of AGI itself, but in general (heh) the G in it should imply the ability to learn any task the way a human would, continuously as well (in a constrained sense, because total generality isn't really achievable with our current knowledge, I believe and have read).

The emergence of these definitions lined up chronologically with the rise of transformer-based LLMs, I believe, especially GPT-3. That timeline makes sense.

However, these architectures don't learn like humans do at all. They don't leverage armadas of extremely subtle abstractions the way our brains efficiently do (the kind that can be demonstrated in simple thought experiments, which I'm too tired to go through here; think carefully about the stages of working out the rules of a roundabout for the first time, for example), and they don't learn continuously. They're more like impressive data crunchers than efficient abstracters like our brains.

To me it's only logical that this ability to learn each and every task that crosses one's mind and approach human level at it (again, within the constraints mentioned above), leveraging efficient transfer learning along the way, should be deemed a requirement of the definition, because otherwise the agent wouldn't really be a general learner, but merely a sort of wasteful imitator of one. That is especially true of the current LLMs, however impressive they are.

So, in conclusion: if the term at hand were improved, something resembling what is talked about in this post could indeed surface in the coming year. But as it stands, no, imo.

2

AdditionalPizza OP t1_ivyznve wrote

To clarify the definition I'm using a little more: simply something between narrow AI and AGI; something that can't be classified as just another narrow AI, or several narrow AIs stuck together, but that also hasn't reached the pinnacle of human ability in every task. It's a very broad range, sure, but it's something undeniably not just narrow AI.

As for an LLM's ability to learn, I don't have a source on hand at the moment, but researchers have shown success with reinforcement learning during pretraining for language models, and the models were able to surpass the abilities of the original algorithms they were pretrained on. I strongly believe RL tied into an LLM will be explored heavily next year, and the results will lead to something most people would call, or that strongly resembles, a Proto-AGI. The term isn't official, of course, but it will be the point where people start really considering AGI on a shorter timeframe.
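One published flavor of "RL tied into a sequence model" is return-conditioned sequence modeling, as in decision-transformer-style work. I can't say this is exactly what the comment above refers to, and the numbers below are toy values, but it conveys the mechanism:

```python
# Toy illustration of RL-as-sequence-modeling: a trajectory becomes a flat
# token stream of (return-to-go, state, action) triples that a language
# model can then be trained on with ordinary next-token prediction.
trajectory = [
    (3.0, 0, 1),   # (return_to_go, state, action)
    (2.0, 1, 1),
    (1.0, 2, 1),
]

def to_tokens(traj):
    """Flatten a trajectory into one sequence a language model can ingest."""
    seq = []
    for rtg, state, action in traj:
        seq += [f"R:{rtg}", f"S:{state}", f"A:{action}"]
    return seq

print(to_tokens(trajectory))
# At inference time you condition on a deliberately high return-to-go and let
# the model generate the actions, which is one mechanism by which such models
# can exceed the policy that produced their training data.
```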

I'm not making any claims about a public release of this, though. Just its existence.

1

footurist t1_ivz26ai wrote

The inadequacy comes from the use of the term prototype, which has a reasonably well-defined meaning. Basically, it serves as an MVP for one or more concepts that are themselves well-defined, so their feasibility and worth can be demonstrated. In the case at hand, the concept is true generality of learning as we know it, which the current mainstream paradigm is definitively not capable of. As mentioned before, these models might achieve a limited imitation of it, to an extent that is probably quite hard to guesstimate, but never the real thing (in their current form; evolution can always change the landscape, of course, but then they wouldn't be the same thing anymore).

I recommend some YouTube videos by Numenta. Jeff Hawkins can explain these kinds of things to laymen incredibly well (he was on Lex's podcast as well).

2

AdditionalPizza OP t1_iw00jyx wrote

Your definition of prototype is not the full definition of the word, though. A prototype can simply be the inspiration for later models. As in, we're on the right track and probably only adjustments/tweaking/fine-tuning, compute, and data away from being able to create full AGI. I think memory is a hurdle we will overcome shortly.

1

footurist t1_iw01b6x wrote

It is, in the sense that it must prove the concept. If it doesn't, it's maybe a precursor of some kind, but not the prototype.

2

AdditionalPizza OP t1_iw02859 wrote

I'm saying that in 2023 the concept will be proven; we will see a concrete roadmap toward AGI because of the successes that SOTA models will achieve.

But I think our very slight difference over two basically synonymous words is more pedantic than I feel like debating, haha. Precursor and prototype are so similar I see no reason to argue either way.

2

AsheyDS t1_iw0vwj7 wrote

Similar in your estimation. I'm guessing you don't work in a technical field. Proto-AGI is just not a good term; it's wildly misleading to the general public and enthusiasts alike, and you're not doing anyone any favors by propagating it. You yourself are a victim of its effects. All it does is create the sense that we're almost there, that the current architectures are sufficient for AGI, and that any outstanding 'problems' aren't really problems anymore. That's nothing but pure speculation. We're not even sure if current transformers are on the same spectrum of classification as AGI. Who's to say it's a linear path? Narrow AI, even an interoperable collection of them, may yet hit a wall in terms of capability and may not be the way forward. We just don't know yet. Nobody is stopping you from speculating, but using this term is highly inaccurate.

2

AdditionalPizza OP t1_iw1b3ou wrote

As if people in technical fields aren't notoriously awful at predicting what's best for the general public.

I'm not doing anyone any disservice, and I'm not propagating anything negative here. My post is literally a poll asking for people's opinions, and stating my own.

1

ElvinRath t1_iw0bboq wrote

I'm pretty sure that nothing currently in the works will be a Proto-AGI.

Maybe in a few years we'll have something that can claim to be one, but... your definition seems very wide. I mean, strictly speaking, we could already have Proto-AGIs according to your definition. We just don't call them that.

2

AdditionalPizza OP t1_iw0da05 wrote

>your definition seems very wide

Yes, it is very wide. I believe 2023 will reveal a more concrete "road map" toward full AGI. I think a version 1.0 of an AI with sufficiently general capabilities will be released or announced that could arguably be defined as Proto-AGI.

4

jamesj t1_ivz1i1n wrote

How will you know if it is lacking consciousness/sentience?

1

AdditionalPizza OP t1_ivz46u6 wrote

I don't know, so I explicitly said that isn't included in my definition.

1

jamesj t1_ivz5768 wrote

Unless I'm misunderstanding you, your definition says Proto-AGI would lack consciousness/sentience.

1

AdditionalPizza OP t1_ivz6z70 wrote

As in, it doesn't have a bearing on the definition, rather than it being something the AI must not have. Basically, let's avoid that subject altogether, as we have zero idea about it now or for the future.

1

Lawjarp2 t1_iw5s2u1 wrote

A significant chance, maybe, but not >50%. A great many ML teams have been lost, and companies that developed open-source tools, like Meta and Google, have lost a lot of their stock value. The recession will undoubtedly have an impact.

1

HumpyMagoo t1_iwb7y1n wrote

I think that 2025 will be the year for all that, though in saying that, it would be better quality than a 2023 version. But if you mean somewhere in a lab, then in that case we already have AGI and it's being kept from the world.

1

MrDreamster t1_ivzyyj3 wrote

I don't like this definition, because you don't need consciousness or sentience to qualify as an AGI or an ASI.

That being said, I still don't think we'll get Proto-AGI in 2023. If I'm being really optimistic, I'd put my money on the end of this decade for Proto-AGI, like 2028 maybe, then 2033 for AGI and 2035 for ASI.

0

AdditionalPizza OP t1_iw00z6m wrote

>I don't like this definition, because you don't need consciousness or sentience to qualify as an AGI or an ASI.

I said exactly that in the definition. I was defining Proto-AGI, not AGI. No consciousness required.

What's your definition of Proto-AGI that would require 5 more years? Would you say our current models are too narrow?

1

MrDreamster t1_iw08jan wrote

Yeah, I understood what you meant about consciousness after seeing your other comments.

My estimate is not based on a personal definition of Proto-AGI, but if I had to define it, I'd say Proto-AGI would basically be an AGI that can do something like 10 different "simple" tasks just as well as a skilled human (drawing, writing, speaking, coding, singing, solving problems, editing video, controlling a car, creating music, and detecting diseases) while still being a single AI and not just an amalgamation of smaller narrow AIs.

An actual AGI would be able to do anything a skilled human could do, instead of just a small number of simple tasks. It should be able to learn new concepts and how to perform new tasks by itself, and it should be able to edit its own code to improve.

An ASI would basically be an AGI after it has evolved for enough time that it can do anything at least as well as the collective minds of all the experts on Earth in every field imaginable, after which it would evolve way beyond what we can imagine right now.

By that logic, we should wait far less time between now and AGI than between AGI and ASI. I just have a gut feeling, nothing more, that we'll reach ASI by 2035, so I have to be a little conservative about when we'll reach Proto-AGI: if we reach Proto-AGI next year, it should kickstart the creation of an actual AGI, which in turn would kickstart its evolution into a proper ASI, fast-forwarding my estimates by around 10 years.

1

AdditionalPizza OP t1_iw0ctks wrote

Oh, interesting, you think the timeframe between Proto-AGI and AGI is shorter than from AGI to ASI?

I think by your definition we nearly have that with 2020 language models. We certainly could do it right now. I think 2023 is when we will do it, but it will require a few problems to be solved, at least as far as we in the general public assume they need solving. They're working on it, and I would be surprised if our next-gen models in 2023 haven't at least been able to solve things like memory. Reinforcement learning in the pretraining phase has massive potential to bridge the gap between the current narrow-scope general AI and a full-blown, generally capable AI that I'd define as Proto-AGI.

But I think "strapping" together multiple models would fit the bill too. They aren't narrow AI; they're just not broadly general or capable enough to cover enough bases. We will see how it unfolds, though.

1

[deleted] t1_ivygqdj wrote

Someone somewhere already has a fully developed super AGI... they are just waiting for the right moment to "release" it into the wild.

I'm thinking military, intelligence agencies, mad scientists... someone along those lines.

−9

z0rm t1_ivyudzb wrote

Someone has watched too many movies

7

MrDreamster t1_iw000kq wrote

Militaries, intelligence agencies, and mad scientists don't allocate the same budget to AI/AGI/ASI research, and they don't have equally qualified researchers working for them, because they can't pay as much as big companies. And big companies have not cracked AGI or ASI yet, so this conspiratorial claim is just silly. Where did you get that preposterous hypothesis? Did Steve tell you that, perchance? Hmm... Steve...

1