Submitted by sideways t3_103hwns in singularity
visarga t1_j30wx6i wrote
Reply to comment by BellyDancerUrgot in 2022 was the year AGI arrived (Just don't call it that) by sideways
> Comparing GPT to a human is stupid. It literally parrots information it memorized.
Can I say you are parroting human language because you are just using a bunch of words memorised somewhere else?
No matter how large is our training set, most word combinations never appear.
Google says:
> Your search - "No matter how large is our training set" - did not match any documents.
Not even these specific 8 words are in the training set! You see?
Language Models are almost always in this domain - generating novel word combinations that still make sense and solve tasks. When did a parrot ever do that?
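The combinatorial point above can be checked with back-of-the-envelope numbers. A minimal sketch, assuming a 50,000-word vocabulary and a trillion-token corpus (both figures are illustrative assumptions, not from the thread):

```python
# Back-of-the-envelope: how many distinct 8-word sequences exist,
# versus how many 8-grams even a huge corpus can contain.
# vocab_size and corpus_tokens are assumed values for illustration.

vocab_size = 50_000                 # assumed working vocabulary
seq_len = 8                         # length of the quoted phrase
corpus_tokens = 1_000_000_000_000   # assume a trillion-token corpus

possible_sequences = vocab_size ** seq_len
# A corpus of N tokens contains at most N - seq_len + 1 distinct 8-grams.
observed_at_most = corpus_tokens - seq_len + 1

print(f"possible 8-word sequences: {possible_sequences:.2e}")
print(f"8-grams one corpus can hold: {observed_at_most:.2e}")
print(f"fraction coverable: {observed_at_most / possible_sequences:.2e}")
```

Under these assumptions the corpus can cover only a vanishing fraction of the space of 8-word sequences, so almost any grammatical sentence a model emits is one it never saw verbatim.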
BellyDancerUrgot t1_j311o8o wrote
No, because humans do not hallucinate information and can derive conclusions based on cause and effect on subjects they haven't seen before. LLMs can't even differentiate between cause and effect without memorizing patterns, something humans can do naturally.
And no, human beings in fact do not parrot information. I can reason about subjects I have never studied, because human beings do not parrot words; they actually understand them rather than memorizing spatial context. It's like we are back at the stage when people thought we had finally developed AGI, back when Goodfellow's paper on GANs was published in 2014.
If you actually get off the hype train, you will realize most major industries use gradient boosting and achieve almost the same generalization performance for their needs as an LLM trained on giga fking tons of data. Because LLMs can't generalize well at all.
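For context on the gradient-boosting claim above, here is a minimal sketch of that kind of tabular setup, using scikit-learn on synthetic data. The dataset and hyperparameters are illustrative assumptions, not anything referenced in the thread:

```python
# Minimal gradient-boosting sketch on synthetic tabular data.
# Dataset shape and hyperparameters are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification problem standing in for tabular
# industry data (fraud flags, churn labels, etc.).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```

The point being made is that for narrow tabular tasks like this, a small boosted-tree model is often competitive without anything close to LLM-scale training data.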
[deleted] t1_j34ba6z wrote
[deleted]
BellyDancerUrgot t1_j34m2fe wrote
Totally irrelevant to the conversation. Doesn’t address anything I said.