
User1539 t1_j6i0aj1 wrote

It's hard to suggest it's '50% of the way' to AGI when it can't really do any reasoning.

I was playing with its coding skills, and the feeling I got was like talking to a kid who was copying off other kids' papers.

It would regularly produce code, then write a summary at the end, and in that summary make factually incorrect statements about the code it had just produced.

If it can't accurately read its own code, then it's not very reliable, right?

I'm not saying this isn't impressive, or a step on the road toward AGI, but the complete lack of reliable reasoning skills makes it less of an 'intelligence' and more like the shadow of intelligence. Being able to add large numbers instantly isn't 'thinking', and calculators do it far better than humans, but we wouldn't call a calculator intelligent.

We'll see where it goes. Lately I've seen some videos and papers that impress me more than LLMs do. People are definitely building systems with reasoning skills.

We may be 50% of the way there, but I don't feel that LLMs represent that on their own.
