
nevermoreusr t1_je6layp wrote

It's kinda more and less data at the same time. While LLMs have definitely been trained on more text than any of us will ever read, to get to our teenage years we have 10 years of basically non-stop, real-time stereoscopic video streaming with the associated five senses, plus six or seven years of iterative memory consolidation. (Though our brain is much slower at processing, it is way more flexible and changes on the fly, unlike most of our current models.)
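A rough back-of-envelope sketch of that "more and less data" point, with all figures (corpus size, bytes per token, the commonly cited ~10 Mbit/s retinal bandwidth estimate) being assumptions rather than anything from the comment:

```python
# Back-of-envelope comparison: raw bytes reaching a teenager's eyes over
# ten waking-hour years vs. bytes of text in a large LLM training corpus.
# Every number below is an assumed order-of-magnitude estimate.

llm_tokens = 1e13            # assumed corpus size for a large modern LLM
bytes_per_token = 4          # assumed average for subword tokens
llm_bytes = llm_tokens * bytes_per_token

years = 10
waking_hours_per_day = 12
seconds = years * 365 * waking_hours_per_day * 3600

retina_bits_per_sec = 1e7    # commonly cited ~10 Mbit/s estimate per eye
eyes = 2
visual_bytes = seconds * retina_bits_per_sec * eyes / 8

print(f"LLM text corpus : ~{llm_bytes / 1e12:.0f} TB")    # ~40 TB
print(f"10 yrs of vision: ~{visual_bytes / 1e12:.0f} TB")  # ~400 TB
```

Under these assumptions the raw sensory stream is about an order of magnitude larger in bytes, even though it contains far fewer "tokens" of symbolic content, which is the sense in which it is both more and less data.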

Maybe what LLMs need right now is multimodality for visual and audio inputs, so they can infer much more relevant information about positioning, world structure, and different intuitions.

23

PandaBoyWonder t1_je9r9z2 wrote

Yep, agreed. I've been saying we need to give it a hard drive, RAM, access to a network time clock, and some sensors to interact with the real world. THEN I think it will start to look more like a human in the way it behaves.

0