Submitted by Particular_Number_68 t3_110679q in singularity
There's one set of people who are ultra-optimistic and say that "ChatGPT is AGI". And then there's the other set of people who say AGI is decades away and that ChatGPT is just "ELIZA on steroids" and "word-order statistics" at play.
Both arguments are naive. ChatGPT is a great dialog agent that has mastered formal language use up to a human level, but it has multiple shortcomings, especially around reasoning and its lack of groundedness, and it's not AGI. However, it is surely a huge step on the path to AGI, and general intelligence models will surely need to use a large language model at least for the part of the intelligent agent that communicates with humans. We need to find things to put on top of these large language models that remove their shortcomings (just like different parts of our brain perform different functions).
On the other hand, ChatGPT is not just "ELIZA on steroids". ELIZA was a very simple, rule-based pattern matcher. LLMs, on the other hand, run on Transformers, which don't have the hand-crafted inductive biases present in old systems like ELIZA. These models don't simply regurgitate words or sentences or just construct random plausible-sounding sentences. They have been shown to develop some world models in their internal representations. Of course these world models are not complete. We surely need something extra on top of LLMs (either a better objective function, or approaches like combining symbolic reasoning with LLMs). But calling LLMs "ELIZA on steroids" is stupid, especially when these models can do zero-shot learning, which ELIZA-like systems cannot.
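To make the contrast concrete, here is a minimal sketch of the kind of keyword-and-template matching ELIZA relied on. This is purely illustrative Python, not Weizenbaum's actual script; the rules and responses are made up for the example. A system like this can only echo its hand-written rules back at the user, so any input outside those rules falls through to a canned reply.

```python
import re

# Toy ELIZA-style responder: hand-written regex rules mapped to canned
# templates. Illustrative sketch only, not the original ELIZA script.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the first matching template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("I need a vacation"))     # -> Why do you need a vacation?
    print(respond("Explain transformers"))  # -> Please tell me more. (no rule, no generalization)
```

An LLM answering the second prompt without any such rule, by drawing on representations learned during pretraining, is what the post means by zero-shot behaviour that ELIZA-like systems cannot do.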
TopicRepulsive7936 t1_j87durr wrote
Transformers are really close to something like a general learner. If people had that in their meme arsenal they might use it instead of general intelligence.
The guy bringing up Eliza, which is only a weird knowledge flex, should make his own subreddit for his strange fixation.