WarmSignificance1 t1_je6jpjt wrote
Humans are trained on a fraction of the data that LLMs are. That actually does matter, because it raises the question: what are LLMs missing?
It doesn’t inherently mean that you can’t get a very powerful system with the current paradigm, but it does mean that you may be missing a better way of doing things.
nevermoreusr t1_je6layp wrote
It's kinda more and less data at the same time. While LLMs have definitely been trained on more text than any of us will ever read, by the time we reach our teenage years we have roughly ten years of basically non-stop, real-time stereoscopic video streaming with all five senses attached, plus six or seven years of iterative memory consolidation. (And though our brain is much slower at processing, it is far more flexible and changes on the fly, unlike most of our current models.)
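The "more data" side of this is easy to sanity-check with a rough back-of-envelope calculation. Everything below is an assumption for illustration (the ~10 Mbit/s optic-nerve ballpark, the corpus size, the bytes-per-token figure), not a measurement:

```python
# Hedged back-of-envelope sketch: raw visual input over ~10 years
# vs. the size of a large text training corpus. All figures are
# assumed ballparks, not measurements.

SECONDS_PER_YEAR = 365 * 24 * 3600

# Assume ~10 Mbit/s per optic nerve (a commonly cited ballpark),
# two eyes, ~16 waking hours a day, over 10 years.
optic_bits_per_sec = 10e6 * 2
waking_fraction = 16 / 24
years = 10
visual_bits = optic_bits_per_sec * waking_fraction * years * SECONDS_PER_YEAR

# Assume a ~1-trillion-token corpus at ~4 bytes of text per token.
corpus_bits = 1e12 * 4 * 8

print(f"visual input : {visual_bits:.2e} bits")
print(f"text corpus  : {corpus_bits:.2e} bits")
print(f"ratio        : {visual_bits / corpus_bits:.0f}x")
```

Even with these crude numbers, raw sensory input comes out two orders of magnitude larger than the text corpus, which is the commenter's point: more raw data, but far less curated text.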
Maybe what LLMs need right now is multimodality, with visual and audio input, so they can infer much more relevant information about positioning, world structure, and various intuitions.
PandaBoyWonder t1_je9r9z2 wrote
Yep, agreed. I've been saying we need to give it a hard drive, RAM, access to a network time clock, and some sensors to interact with the real world. THEN I think it will start to look more like a human in the way it behaves.
drekmonger t1_je74aq3 wrote
Also noteworthy: we "train" and "infer" with a fraction of the energy cost of running an LLM, and that includes the necessary life-support and locomotive systems. With transformer models, we're evidently brute-forcing something that evolutionary biology has found more economical solutions for.
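The gap the comment describes can be sketched numerically. The brain's ~20 W figure is widely cited; the GPU wattage and node size below are assumed ballparks for a large-model serving setup, not figures for any specific model:

```python
# Hedged sketch of the energy gap between a brain and a GPU node.
# brain_watts is a commonly cited figure; the GPU numbers are
# assumptions for illustration only.

brain_watts = 20          # human brain draws roughly 20 W
gpu_watts = 700           # one datacenter GPU (H100-class) under load
gpus_per_node = 8         # assume an 8-GPU node serving one large model

ratio = gpu_watts * gpus_per_node / brain_watts
print(f"GPU node draws ~{ratio:.0f}x the power of a brain")  # ~280x
```

And the brain is simultaneously doing vision, motor control, and homeostasis on that budget, which is what makes the comparison so lopsided.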
There will come a day when GPT 5.0 or 6.0 can run on a banana peel.
naum547 t1_je7ml6j wrote
LLMs are trained exclusively on text, so they excel at language: they have an amazing model of human languages and know how to use them. What they lack, for example, is a model of the Earth, so they fail at using latitude and longitude. Same for math: the only reason they would know 2 + 2 = 4 is that they have read "2 + 2 = 4" enough times, but they have no concept of it. If they were trained on something like 3D objects, they would understand that 2 things plus 2 things make 4 things.
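The distinction the comment draws, statistical pattern versus actual computation, can be caricatured in a few lines. This is a toy with a hypothetical four-line "corpus"; real LLMs are vastly more sophisticated, but the contrast is the same:

```python
# Toy contrast: answering "2 + 2 = ?" by corpus frequency
# vs. by actually performing the operation.
# The mini-corpus below is hypothetical, for illustration only.

from collections import Counter

corpus = ["2 + 2 = 4", "2 + 2 = 4", "2 + 2 = 5", "3 + 3 = 6"]

# Count what most often followed "2 + 2 =" in the corpus.
completions = Counter(
    line.split("= ")[1] for line in corpus if line.startswith("2 + 2")
)
pattern_answer = completions.most_common(1)[0][0]

# Grounded answer: compute it.
computed_answer = str(2 + 2)

print(pattern_answer, computed_answer)  # both "4", but for different reasons
```

Both routes print "4" here, but only because "4" happened to dominate the corpus; the pattern-matcher has no way to know the "5" line is wrong, which is the "no concept of it" point.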
Andriyo t1_je8q3s6 wrote
I'd argue that humans are trained on more data, and that the majority of it comes from our senses and the body itself. The text we read during our lifetime is probably just a small fraction of the total input.