
JacksCompleteLackOf t1_je1fbnr wrote

GPT-4 is certainly an incremental step over 3, 2, and 1, and a lot of that was predictable. It's good to see that it hallucinates a lot less than it used to.

I see lots of psychology and business types talking about how we are almost at AGI, but where are the voices of the people actually working on this stuff? LeCun? Hinton? Even Carmack?

I do agree that it's getting closer to where it will replace some jobs. That part isn't hype.

8

Zetus t1_je1t0e5 wrote

Funny enough, I actually spoke to Yann LeCun in person this past Friday at https://phildeeplearning.github.io/ at NYU. During the debate, he essentially argued that a world model is required for solving some of the problems we're currently running into. When I spoke with him afterwards, he essentially expressed that the current naive approaches are not capable of engendering the proper dynamics. I took a recording of the talk/debate; I'll upload it later today :)

Listen to the scientists, not the hype marketers!

Here is a copy of the slides for his talk: https://twitter.com/ylecun/status/1640133789199347713?s=19

Edit: uploaded the video here: (https://youtu.be/Cdd9u2WG3qU)

14

FoniksMunkee t1_je3705l wrote

Microsoft may have agreed. In the paper they released that talked about "sparks of AGI", they identified a number of areas that LLMs fail at, mostly forward planning and leaps of logic or eureka moments. They actually pointed at LeCun's paper and said it's a potential solution... but that suggests they can't solve it yet with the ChatGPT approach.

3

datalord t1_je2mkh9 wrote

The “Sparks of AGI” paper mentioned above is literally published by Microsoft, who researched it alongside OpenAI.

This paper, published yesterday by OpenAI themselves, discusses just how many people will be impacted. His Twitter post summarises it well.

Sam Altman recently spoke to Lex about the power and limits of what they have built. They also discussed AGI. Suffice it to say, those working on it are talking about it at length.

5

JacksCompleteLackOf t1_je2z152 wrote

I hadn't seen the OpenAI paper before, but it states that it's about the coming decades, which makes the Twitter thread more interesting, because one of the authors is putting a hard date of 2025 on some of those innovations.

It's pretty easy to find flaws in the Microsoft Research paper. It's funny that they hype up its performance on coding interviews but don't mention that it falls down on data it hasn't been explicitly trained on: https://twitter.com/cHHillee/status/1635790330854526981

Admittedly, I'm probably more skeptical than most.

5

FoniksMunkee t1_je37buh wrote

I'm pretty sure they mentioned something like that in passing, didn't they? I know they have a section in there about how it fails at some math and language problems because it can't plan ahead and can't make leaps of logic. And they considered these substantial problems with GPT-4, with no obvious fix.

4

JacksCompleteLackOf t1_je389eh wrote

Actually, I think you're right; they did mention it. I guess I wish they had emphasized that aspect more than the 'sparks of general intelligence'. It's mostly a solid paper for what it is; they admit they don't know what the training data looks like. I just wish they had left the paragraph about the sparks out of it.

1

FoniksMunkee t1_je38yix wrote

Yes, I agree. The paper was fascinating, but a lot of people took away from it the idea that AGI is essentially here. When I read it, I saw a couple of issues that may be speed bumps in progress. They definitely underplayed what seems to be a difficult problem to solve within the current paradigm.

2

datalord t1_je4ep43 wrote

Logic leaps, if rational, are not really leaps; we just don't perceive the invisible steps between them. A machine can make those jumps with sufficient instruction and/or autonomy. It's just constant iteration and refinement along a specific trajectory.

If irrational, then it's much harder; perhaps that's what creativity is, in some respects.

1

Northcliff t1_je2zwnu wrote

John Carmack doesn’t get enough love in this sub

4