BellyDancerUrgot t1_jdtci38 wrote

Well, let me ask you: how does it fail simple problems if it can solve more complex ones? If it were solving these problems analytically, it stands to reason that it would never make an error on a question that simple.

1

StrippedSilicon t1_jdte8lj wrote

That's why I'm appealing to the "we don't actually understand what it's doing" case. The AGI-like intelligence explanation certainly falls apart in a lot of cases, but the explanation that it's only spitting out the training data in a different order or context doesn't work either.

1