
tiorancio t1_j6fzd1y wrote

We can pretend that ChatGPT is still not over 50% of the way to AGI, but come on, it is. It can already do everything better than 50% of the population. "But it won't do whatever": well, your next-door neighbour also won't. We're comparing it to "us", the smart people, but given the right interface it can outsmart most people any day now. People are getting scammed by Russian bots posing as women, by Nigerians pretending to be wealthy princes, by African shamans claiming great psychic powers. Those people won't stand a chance against ChatGPT as it is today, if only they had the chance to interact with it. We're already there, and the tech companies know it.

15

TeamPupNSudz t1_j6g76rl wrote

> It can already do everything better than 50% of the population. "But it won't do whatever": well, your next-door neighbour also won't.

I think that's the nature of the beast at the moment. Goalposts will constantly be moved as we come to better understand the abilities and limitations of this technology, and that's a good thing. Honestly, there's never going to be a moment where we go "aha! We've achieved AGI!". Even 30 years down the road when these things are running our lives, teaching our kids, and who knows what else, a portion of the population will always just see them as an iPhone app that's not "really" intelligent.

6

User1539 t1_j6i0aj1 wrote

It's hard to suggest it's '50% of the way' to AGI when it can't really do any reasoning.

I was playing with its coding skills, and the feeling I got was like talking to a kid who was copying off other kids' papers.

It would regularly produce code, then give a summary at the end, and in that summary make factually incorrect statements about what the code actually does.

If it can't read its own code, then it's not very reliable, right?
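
To give a concrete (made-up) example of the kind of mismatch I mean, here's a sketch in Python; the function and the quoted "summary" are hypothetical, not an actual ChatGPT transcript:

```python
# Hypothetical illustration of code that works on its own while the
# accompanying summary misstates what it does.

def remove_duplicates(items):
    """Return the unique elements of `items`."""
    # set() drops duplicates but does NOT preserve the original order.
    return list(set(items))

# A summary in the style described above might claim:
#   "This removes duplicates while preserving the original order of the list."
# The first half is true; the second half is factually wrong, and it's the
# kind of statement you only catch by reading the code yourself.

if __name__ == "__main__":
    print(remove_duplicates([3, 1, 3, 2, 1]))  # output order is not guaranteed
```

That's the sort of thing I kept running into: the code is plausible, but the explanation attached to it doesn't survive contact with the code.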

I'm not saying this isn't impressive, or that it isn't a step on the road toward AGI, but the complete lack of reliable reasoning skills makes it less of an 'intelligence' and more like the shadow of intelligence. Being able to add large numbers instantly isn't 'thinking'; calculators do it far better than humans, but we wouldn't call a calculator intelligent.

We'll see where it goes. Lately I've seen some videos and papers that impress me more than LLMs do. People are definitely building systems with reasoning skills.

We may be 50% of the way, but I don't feel that LLMs represent that on their own.

2

GPT-5entient t1_j6jz9w0 wrote

LLMs are still incredibly limited and operate ONLY on text. AGI would be an independent agent able to take on any human task on its own. We're still quite far from that. There are whole classes of problems where a 5-year-old performs better than ChatGPT.

2