Submitted by possiblybaldman t3_11a9j56 in singularity
DillyDino t1_j9slxkg wrote
Large Language Models are amazing, but they have massive, massive gaps compared to a true AGI. We still don’t have a good way of augmenting their memory units. But man are people trying at least. Toolformer is the latest paper I read that attacks this idea.
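To make that concrete, here's a toy sketch of the Toolformer idea (not the paper's actual code, and the marker syntax and `CALC` tool are made up): the model emits an inline call like `[CALC(17*23)]`, a wrapper detects it, runs the tool, and splices the result back into the text before generation continues.

```python
import re

# Toy tool registry; a restricted eval stands in for a real calculator API.
TOOLS = {"CALC": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def expand_tool_calls(text: str) -> str:
    """Replace [TOOL(args)] markers with the tool's output, Toolformer-style."""
    pattern = re.compile(r"\[(\w+)\((.*?)\)\]")
    def run(match: re.Match) -> str:
        name, args = match.group(1), match.group(2)
        # Unknown tool names are left in place rather than erroring out.
        return TOOLS[name](args) if name in TOOLS else match.group(0)
    return pattern.sub(run, text)

print(expand_tool_calls("The answer is [CALC(17*23)]."))  # -> "The answer is 391."
```

The hard part Toolformer actually tackles is teaching the model *when* to emit those calls in the first place; the wrapper above is the trivial half.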
They still fundamentally struggle with common-sense reasoning, in much the same way a deep-learning model in a self-driving car struggles to bridge that gap. And we've hit a bit of a wall there, so to speak. We haven't solved this well. More self-attention layers and reinforcement-learning guidance won't do it. GPT-4 will be impressive, but turning 96 transformer layers into 1,000 of them (or whatever) still just gives you a bigger function approximator. Extrapolating when we'll solve that missing piece is still just guesswork. That's why I'm amazed when people say AGI will just get solved by 2030 because of an advancement in LLMs.
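A minimal sketch of that "bigger function approximator" point, in PyTorch (the layer sizes here are made up, not GPT-4's real config): stacking more identical transformer layers multiplies the parameter count, but it's the same architecture class with no new reasoning mechanism.

```python
import torch.nn as nn

# One standard transformer layer with illustrative (assumed) dimensions.
layer = nn.TransformerEncoderLayer(
    d_model=512, nhead=8, dim_feedforward=2048, batch_first=True
)
per_layer = sum(p.numel() for p in layer.parameters())

# Depth scales parameters linearly; nothing qualitatively new is added.
for depth in (96, 1000):
    print(f"{depth} layers -> ~{depth * per_layer / 1e6:.0f}M params, same function class")
```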