Submitted by Particular_Number_68 t3_110679q in singularity

There's one set of people who are ultra-optimistic and say that "ChatGPT is AGI". And then there's the other set of people who say AGI is decades away and ChatGPT is just "ELIZA on steroids" and "word order statistics" at play.

Both arguments are naive. ChatGPT is a great dialog agent which has mastered formal language use up to a human level. But it has multiple shortcomings, especially around reasoning and its lack of groundedness. And it's not AGI. However, it is surely a huge step on the path to AGI, and general intelligence models will surely need to use a large language model, at least for the part of the intelligent agent that communicates with humans. We need to find things to put on top of these large language models that will remove their shortcomings (just like different parts of our brain perform different functions).

On the other hand, ChatGPT is not just "ELIZA on steroids". ELIZA was a simple rule-based pattern-matching program, not a learned model at all. LLMs, on the other hand, run on Transformers, which don't have the hand-coded inductive biases present in old systems like ELIZA. These models don't simply regurgitate words or sentences or just construct random plausible-sounding sentences. They have been shown to develop some world models in their internal representations. Of course these world models are not complete. We surely need something extra on top of LLMs (either a better objective function, or approaches like combining symbolic reasoning with LLMs). But calling LLMs "ELIZA on steroids" is stupid, especially when these models can do zero-shot learning, which ELIZA-like systems cannot.
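To make the contrast concrete: ELIZA amounted to a handful of hand-written pattern/response rules with no learning anywhere. A minimal sketch (the rules and phrasings here are illustrative, not Weizenbaum's actual script):

```python
import re

# A minimal ELIZA-style responder: a fixed list of hand-written
# regex patterns mapped to canned response templates. There is no
# learning and no internal representation -- just string matching.
RULES = [
    (r".*\bI need (.*)", "Why do you need {0}?"),
    (r".*\bI am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def eliza_respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(eliza_respond("I am sad"))  # -> How long have you been sad?
```

Anything not covered by a rule falls through to a canned default, which is exactly the kind of behavior a zero-shot-capable LLM does not share.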

11

Comments


Effective-Dig8734 t1_j87bxas wrote

I think it’s still reasonable to say AGI is decades away. The right comparison for someone saying ChatGPT is AGI would be someone saying AGI is still centuries away.

0

TopicRepulsive7936 t1_j87durr wrote

Transformers are really close to something like a general learner. If people had that in their meme arsenal they might use it instead of general intelligence.

The guy bringing up Eliza, which is only a weird knowledge flex, should make his own subreddit for his strange fixation.

14

Tiamatium t1_j87eh3g wrote

Reddit is full of 15-year-olds, are you surprised the debate here has all the nuance a 15-year-old brain can handle?

11

TinyBurbz t1_j87xyi6 wrote

From ChatGPT itself:

> GPT, or Generative Pretrained Transformer, is a type of language model developed by OpenAI. It is not considered an Artificial General Intelligence (AGI), but rather a specialized AI system designed to generate text based on patterns learned from large amounts of training data.
>
> GPT uses deep learning techniques, specifically the Transformer architecture, to generate text based on patterns it has learned from a large corpus of text data. The model is trained using a process called unsupervised learning, where it is exposed to a vast amount of text and learns to predict the next word in a sequence based on the context of the words that came before it.
>
> While GPT has achieved remarkable results in generating text that is coherent and semantically meaningful, it is still limited in its capabilities. It does not have the ability to reason, understand the world, or perform tasks that are not directly related to generating text. These limitations are a key factor in why GPT is not considered an AGI.
>
> In summary, GPT is a predictive model that is specialized in generating text based on patterns learned from large amounts of training data. While it has achieved remarkable results in its domain, it does not have the general intelligence capabilities that are associated with AGI.

Take the bot's own word for it
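The "predict the next word" objective the quote describes can be illustrated with a toy count-based model. (Real LLMs learn this with a Transformer over billions of tokens; this sketch only shows the objective, not the architecture.)

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    # return the most frequently observed successor of `word`
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

The gap between this and GPT is the learned, context-sensitive representation, but the training signal — guess the next token — is the same.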

6

DukkyDrake t1_j897fzr wrote

>it is surely a huge step towards the path to AGI

No such thing is assured, unless maybe if you're referring to a compositional AGI system. Everything stands on its own merits. Don't discount the possibility that you're subject to the same bias blind spots as those you accuse.

2

Villad_rock t1_j89ynbr wrote

Did you copy the text from all the others who posted the exact same thing?

1

Iffykindofguy t1_j8d8tnc wrote

Nothing as naive as thinking this is the first time this exact post has been made lol

1

LeCodex t1_ja91q12 wrote

I'm not sure that "formal language use" means what you think it means.

Moreover, it's (ironically enough) naïve to presume that ChatGPT is a "huge" step toward AGI when all it is is a very good narrow AI (yes, plausible dialog is still a narrow task; it denotes a skill, and skills aren't general intelligence). You wouldn't say that AlphaZero was a huge step on the path to AGI just because it was so much better than Stockfish at the time. Why is it different once dialog is involved?

2

Particular_Number_68 OP t1_jabcuie wrote

When I talk about "formal language use" I refer to the term in the context of the paper https://arxiv.org/pdf/2301.06627.pdf. Why is it a huge step towards AGI? Because a system that has general intelligence will be a system that has mastered language use, both formal and functional as defined in the paper.

Interestingly, the very limitations of current LLMs, such as hallucinations and poor logical reasoning, can be addressed via LLMs themselves by a process known as autoformalization (https://arxiv.org/pdf/2205.12615.pdf). They teach an LLM to translate natural language into a "formal" language (basically a computer program), in this case Isabelle, which is used for math proof verification.

What would this enable? Imagine you give an LLM a math problem and ask it to solve it. If you have an agent that can tell whether the LLM's solution is correct, you can use this setting to train the LLM via reinforcement learning. Autoformalization acts as that agent: the solution given by the LLM is translated from natural language into Isabelle and verified by the Isabelle software. If the proof checks out, the LLM gets positive reinforcement; if it fails, negative reinforcement. And who does the translation? An LLM itself!

How is this connected to AGI? You can induce reasoning into language models this way, because pretty much any real-world problem (with some exceptions due to the incompleteness theorems) rests on a certain set of axioms, and the solution can be proved in a mathematical sense. This would allow LLMs to master functional language use as well, and would make them more grounded.
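The loop described above can be sketched as follows. Every function here is a hypothetical stand-in (the real systems are an LLM, an autoformalizing LLM, and the Isabelle proof checker), so this only shows the shape of the reward signal, not the actual pipeline from the papers:

```python
import random

def llm_solve(problem: str) -> str:
    # stand-in for an LLM producing a natural-language solution
    return f"informal solution to: {problem}"

def autoformalize(solution: str) -> str:
    # stand-in for a second LLM translating the solution into Isabelle
    return f"theorem: {solution}"

def proof_checker_accepts(formal_proof: str) -> bool:
    # stand-in for the Isabelle proof checker's verdict
    return random.random() < 0.5

def reinforce(problem: str) -> int:
    solution = llm_solve(problem)          # step 1: solve informally
    formal = autoformalize(solution)       # step 2: autoformalize
    # step 3: verified proof -> positive reward, failed proof -> negative
    reward = +1 if proof_checker_accepts(formal) else -1
    # a real setup would now update the solver LLM's weights using this reward
    return reward

print(reinforce("show that the sum of two even numbers is even"))
```

The key design point is that the reward comes from a mechanical verifier rather than from human feedback, which is what makes the reinforcement signal trustworthy for reasoning tasks.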

The beauty of LLMs is that they bridge the gap between natural language and a formal computer program. This, along with their few-shot learning capabilities, shows that LLMs are indeed a huge leap towards AGI.

1