
BellyDancerUrgot t1_j311o8o wrote

No, because humans do not hallucinate information and can draw conclusions based on cause and effect about subjects they haven't seen before. LLMs can't even differentiate between cause and effect without memorizing patterns, something humans do naturally.

And no, human beings do not in fact just parrot information. I can reason about subjects I have never studied because humans actually understand words rather than memorizing spatial context. It's like we're back at the stage when people thought we had finally developed AGI, right after Goodfellow's GAN paper was published in 2014.

If you actually get off the hype train, you will realize most major industries use gradient boosting and achieve almost the same generalization performance for their needs as an LLM trained on giga fking tons of data. Because LLMs can't generalize well at all.
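For context on the gradient-boosting point: the core idea is just repeatedly fitting a weak learner to the residuals of the current ensemble. Below is a minimal pure-Python sketch using one-split decision stumps on a toy regression problem; the function names (`fit_stump`, `gradient_boost`) are illustrative, not from any library, and real industry use goes through implementations like XGBoost or LightGBM.

```python
# Minimal gradient boosting with decision stumps (illustrative sketch).
# Each round fits a one-split "stump" to the residuals of the current
# ensemble, then adds it scaled by a small learning rate.

def fit_stump(xs, residuals):
    """Find the single threshold split on x that best fits the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def gradient_boost(xs, ys, rounds=200, lr=0.1):
    base = sum(ys) / len(ys)           # start from the mean prediction
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Toy data: a noiseless step function, which boosted stumps recover closely.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 0, 10, 10, 10, 10]
model = gradient_boost(xs, ys)
```

Each round shrinks the remaining residual by a constant factor here, so after a couple hundred rounds `model(0)` is close to 0 and `model(7)` is close to 10.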
