D_Ethan_Bones t1_je6ggtx wrote

This sub is flooded with people who started caring about AI in the past few months and gum up the terminology. People say AGI when they mean ASI, sometimes combining this with the idea that AGI is any minute now.

The latter is based on a much looser definition of AGI which is nowhere near ASI, but saying AGI 2023 Singularity 2023 gets updoots and retoots.

Then there's the people who just bust in here and say "well that's not TRUE AI" - the first time I have seen 'true' be the key term is from... these people.

23

AsuhoChinami t1_je6jr7f wrote

I look forward to AGI in part so that there can be an end to these "anyone who thinks AGI is coming anytime soon is an idiot" posts.

14

Dwanyelle t1_je6p7g3 wrote

Heh, we still argue over whether some humans are fully human; we're going to be arguing over whether we've achieved AGI or not for a looooong time, unfortunately

5

hungariannastyboy t1_je8ciqk wrote

>we still argue over whether some humans are fully human

uum what?

2

Quentin__Tarantulino t1_je8lewy wrote

There’s a lot of really racist people out there.

2

Dwanyelle t1_jec11wn wrote

Yeah.

Ethnic cleansing, nationalistic ideologies, brutally patriarchal governments, disagreements over when exactly human life begins, historical attitudes towards folks with disabilities, esp cognitive ones, the list goes on.

Take your pick.

1

pig_n_anchor t1_je72e8k wrote

Under my definition (the only correct one), AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful. So if you start with human-level AGI, you will soon reach ASI within months, or maybe just a matter of hours. Also, even narrow AI is superhuman at the things it can do well. E.g. a calculator is far better at basic arithmetic than any human. If an AI were really a general-purpose machine, then I can't see how it would not be instantly superhuman at whatever it does, if only because it will produce results much faster than a human. For these reasons, the definition of ASI collapses into AGI. Like I said, my definition is the only correct one and if you don't agree with me, you are wrong 😑.

5

drekmonger t1_je73xjv wrote

While the statement that "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" is a possibility, it is not a required qualification of AGI.

AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.

Recursive self-improvement, also known as the concept of an intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.

--GPT4

11

pig_n_anchor t1_je75t91 wrote

AI would say that. Trying to lull us into a false sense of security!

Edit: AI researchers are already using GPT4 to improve AI. Yes it requires an operator, but more and more of the work is being done by AI. Don’t you think this trend will continue?

1

drekmonger t1_je7cylg wrote

Yes. The trend will continue.

However, I think it's still important to note that recursive self-improvement is not a qualification of AGI, but a consequence. One could imagine a system that's intentionally curtailed from such activities, for example. It could still be AGI.

2

pig_n_anchor t1_je7mw6c wrote

I agree. I'm just saying that anything that could rightly be called AGI will almost certainly have that capability. I suppose it's theoretically possible to have one that can't improve itself, but considering how good it is at programming already, I see it as very unlikely.

1

Ortus14 t1_je78a9u wrote

The first AGI will be an ASI because AI and computers already have massive advantages over humans. So for all practical purposes, AGI and ASI are synonymous.

1