DauntingPrawn t1_jdhi42a wrote
Reply to comment by DragonForg in Could GNNs be the future of AI? by mrx-ai
Complex cognition exists independently of language structures, and LLMs mimic language structures, not cognition. You can destroy the language centers of the brain and general intelligence, i.e. cognition and self-recognition, remains intact. Meanwhile ChatGPT isn't thinking or even imitating thought; it's imitating language by computing a latent space for the next word based on prior language input. Math.
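To make the "it's just math" point concrete, here's a rough numpy sketch of what "computing a latent space for the next word" boils down to. The random matrix is a stand-in for the transformer layers, and the tiny vocabulary and numbers are made up for illustration:

```python
import numpy as np

# Toy sketch of next-word prediction over a fixed vocabulary.
# A random matrix stands in for the trained transformer layers.
rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat", "."]   # fixed at training time
d_model = 8                                        # size of the latent space
embed = rng.normal(size=(len(vocab), d_model))     # one latent vector per known token
unembed = embed.T                                   # projects the latent state back onto the vocab

def next_token_distribution(context_ids):
    # Collapse the context into one latent vector (stand-in for attention layers).
    hidden = embed[context_ids].mean(axis=0)
    logits = hidden @ unembed                       # score every known token
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                      # softmax: probability of each next word

context = [vocab.index(w) for w in ["the", "cat"]]
probs = next_token_distribution(context)
print(sorted(zip(vocab, probs.round(3)), key=lambda p: -p[1]))
```

Whatever word scores highest gets emitted. No model of the world anywhere in that loop, just geometry over tokens it has already seen.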
Meanwhile a baby can act on knowledge learned by observing the world long before language emerges. AGI requires more than language, more than memory. It requires the ability to model reality and learn language from raw sensory input alone, to synthesize observation and information into new ideas, motives to act on that information, the ability to predict an outcome, and a value scale to weigh one potential outcome against another. A baby can do all of that, but ChatGPT doesn't even know when it's spouting utter nonsense, and Stable Diffusion doesn't know how many fingers a human has.
We have no way of modeling unobserved information. An LLM cannot add a new word to its model; it will never talk about anything invented after its training. Yes, they are impressive. On the level of parlor tricks and street magic.
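On the "can't add a new word" point, the reason is that the token list (and the embedding table keyed to it) is frozen when training ends. A toy illustration, with a made-up vocabulary and a hypothetical post-training coinage:

```python
# Toy illustration: the vocabulary, and the embedding row behind each entry,
# is fixed at training time.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, ".": 5}

def token_id(word):
    if word in vocab:
        return vocab[word]          # the model learned a vector for this during training
    # A word coined after training has no row in the embedding table; real tokenizers
    # fall back to subword pieces or an unknown token rather than learning a new vector.
    return None

print(token_id("cat"))          # 1
print(token_id("vibecoding"))   # None -- hypothetical word coined after the training cutoff
```

Giving that new word a vector of its own means growing the table and retraining, not something the deployed model can do on the fly.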