Cr4zko t1_j5key8v wrote

How do you go from the LLMs of today to full blown AGI in 6 years?


red75prime t1_j5khfna wrote

You find a way to make it recurrent (keep state alongside the input buffer), add memory (working memory as part of that state, plus long-term memory), overcome catastrophic forgetting in online learning, and find efficient intrinsic motivations. Maybe that's enough.
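The recurrent-state idea above can be sketched as a toy agent loop. Everything here is hypothetical illustration: `frozen_model` stands in for a pretrained LLM, and the class names are made up; no real model, library, or training method is implied.

```python
from collections import deque

def frozen_model(tokens):
    # Stand-in for a frozen pretrained model: here it just echoes the
    # last token. In the comment's proposal this would be a large LLM
    # whose weights stay fixed at run time.
    return tokens[-1]

class RecurrentWrapper:
    """Toy sketch of the proposal: a recurrent state kept alongside the
    input buffer, acting as working memory, plus an append-only
    long-term memory. Not a real architecture, just the control flow."""

    def __init__(self, state_size=4):
        self.state = deque(maxlen=state_size)  # bounded working memory
        self.long_term = []                    # long-term memory store

    def step(self, observation):
        # Feed the current state together with the new input...
        context = list(self.state) + [observation]
        output = frozen_model(context)
        # ...then fold the output back into the state (the recurrence),
        # and log the interaction to long-term memory.
        self.state.append(output)
        self.long_term.append((observation, output))
        return output

agent = RecurrentWrapper()
for obs in ["a", "b", "c"]:
    agent.step(obs)
```

The missing hard parts the comment names (catastrophic forgetting, intrinsic motivation) are exactly what this loop doesn't solve: the model never updates, and nothing drives it to act without input.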


MrEloi t1_j5ku3gu wrote

That is the obvious next step.

I'm not sure how easy it will be, though.

It could be that a large 'frozen' model in combination with some clever run-time code and a modicum of short/medium term memory would suffice.

After all, the human brain seems (to me) to be a huge static memory plus relatively little run-time stuff.
