Submitted by Vegetable-Skill-9700 t3_121agx4 in deeplearning
StrippedSilicon t1_jdt7h5o wrote
Reply to comment by BellyDancerUrgot in Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
So... how exactly does it solve a complicated math problem it hasn't seen before if it's only regurgitating information?
BellyDancerUrgot t1_jdtci38 wrote
Well, let me ask you: how does it fail simple problems if it can solve more complex ones? If it were solving these problems analytically, it stands to reason that it would never make an error on a simple question like that.
StrippedSilicon t1_jdte8lj wrote
That's why I'm appealing to the "we don't actually understand what it's doing" case. Certainly the AGI-like intelligence explanation falls apart in a lot of cases, but the explanation that it's only spitting out the training data in a different order or context doesn't work either.