Submitted by sideways t3_103hwns in singularity
Borrowedshorts t1_j30dr0i wrote
Attempting to skip an entire autonomy level was idiotic and has slowed progress in the AV industry immensely. Engineers and business leaders bet big that they could jump straight from L2 driver assistance to L4, skipping L3 (conditional autonomy) entirely. Well, they were wrong.
AsuhoChinami t1_j303sus wrote
Stupid post. Stupid person.
BellyDancerUrgot t1_j30tyxp wrote
Comparing current AI to AGI is laughable. To quote Yoshua Bengio: "current AI algorithms are dumber than a dog"; iirc that's what he said in a 2021 video interview. None of the leading researchers in the field, be it LeCun or Bengio or Parikh or Hinton, think we are remotely close to basic human intelligence. Comparing GPT to a human is stupid. It literally parrots information it memorized. Attention and self-attention aren't magic. We are at a stage where AI, or rather PI, is good enough to recover some context for some words because it has seen them billions of times. In fact, we aren't even at a stage where any model can reliably avoid hallucinating things that aren't true, so it technically doesn't even understand true context. Ask any worthwhile researcher in the field and they'll tell you this article is complete garbage.
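To be concrete about the "not magic" point: self-attention is a few matrix multiplications and a softmax. A minimal NumPy sketch of scaled dot-product self-attention (the dimensions and random weights below are illustrative placeholders, not anything from a real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X.

    X:  (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Each token attends to every token, weighted by query-key similarity
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted average of value vectors

# Illustrative sizes: 4 tokens, 8-dim embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```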
There’s an entire branch of ML that focuses on scaling. Irina Rish is one of the big names behind the “scale is all you need” motto. Is she right? Maybe! But even she’ll tell you that we aren’t within reach of the dumbest human being when it comes to intelligence.
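For reference, that scaling work fits empirical power laws relating loss to model size and data. A sketch of the Chinchilla-style functional form (Hoffmann et al., 2022); the constants below are illustrative placeholders, not the published fitted values:

```python
def chinchilla_loss(N, D, E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    """Chinchilla-style scaling law: loss as a power law in parameter
    count N and training tokens D. Constants are placeholders."""
    return E + A / N**alpha + B / D**beta

# More parameters and more data lower predicted loss, with diminishing returns
for N, D in [(1e9, 2e10), (7e10, 1.4e12), (5e11, 1e13)]:
    print(f"N={N:.0e}, D={D:.0e} -> loss ~ {chinchilla_loss(N, D):.3f}")
```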
marvinthedog t1_j3123ct wrote
If AI algorithms of 2021 were remotely comparable to a dog, it seems to me that we are getting really, really, really close.
visarga t1_j30wx6i wrote
> Comparing GPT to a human is stupid. It literally parrots information it memorized.
Can I say you are parroting human language because you are just using a bunch of words memorised somewhere else?
No matter how large is our training set, most word combinations never appear.
Google says:
> Your search - "No matter how large is our training set" - did not match any documents.
Not even these specific 8 words are in the training set! You see?
Language models are almost always in this regime: generating novel word combinations that still make sense and solve tasks. When did a parrot ever do that?
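A back-of-the-envelope check of how sparse language is (the vocabulary and corpus sizes below are rough order-of-magnitude assumptions, not measured values):

```python
import math

vocab_size = 50_000      # rough size of a subword vocabulary
corpus_tokens = 10**12   # rough size of a large training corpus
seq_len = 8              # length of the quoted phrase

possible = vocab_size ** seq_len   # distinct 8-token sequences
fraction = corpus_tokens / possible  # the corpus holds < corpus_tokens 8-grams

print(f"possible 8-token sequences: ~10^{math.log10(possible):.0f}")
print(f"fraction that could appear in training: ~10^{math.log10(fraction):.0f}")
# ~10^38 possible sequences, of which at most ~10^-26 could ever be seen:
# almost every 8-gram a model emits is necessarily novel
```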
BellyDancerUrgot t1_j311o8o wrote
No, because humans do not hallucinate information and can derive cause-and-effect conclusions about subjects they haven’t seen before. LLMs can’t even differentiate between cause and effect without memorizing patterns, something humans do naturally.
And no, human beings in fact do not parrot information. I can reason about subjects I have never studied, because human beings do not merely parrot words; we actually understand them rather than memorizing spatial context. It’s like we are back at the stage when people thought we had finally developed AGI, back when Goodfellow’s paper on GANs was published in 2014.
If you actually get off the hype train you will realize most major industries use gradient boosting and achieve almost the same generalization performance for their needs as an LLM trained on giga-fking-tons of data, because LLMs can’t generalize well at all.
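For concreteness, this is roughly what that industry baseline looks like; a minimal scikit-learn sketch on synthetic tabular data (the dataset and hyperparameters are placeholders, not a real workload):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the tabular data most industries actually have
X, y = make_classification(n_samples=10_000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Gradient-boosted trees: strong tabular baseline, cheap to train
clf = HistGradientBoostingClassifier(max_iter=200, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```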
[deleted] t1_j34ba6z wrote
[deleted]
BellyDancerUrgot t1_j34m2fe wrote
Totally irrelevant to the conversation. Doesn’t address anything I said.