
Liberty2012 t1_j9u06qk wrote

The hallucination problem seems to be a significant obstacle, and one that is inherent in the architecture of LLMs. As long as it remains unresolved, their applications are going to be far more limited than the current hype suggests.
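To make that concrete, here's a toy sketch (invented probabilities, nothing like a real model) of why plain next-token decoding has no built-in notion of truth: the loop only ever asks which word is most likely next, so a statistically plausible but false continuation comes out with exactly the same confidence as a correct one.

```python
# Hypothetical "learned" next-word distributions, standing in for an LLM's output layer.
NEXT_WORD = {
    ("capital", "of"): {"Australia": 1.0},
    ("of", "Australia"): {"is": 1.0},
    ("Australia", "is"): {"Sydney": 0.7, "Canberra": 0.3},  # the wrong answer is the likelier one
    ("is", "Sydney"): {"<end>": 1.0},
    ("is", "Canberra"): {"<end>": 1.0},
}

def greedy_continuation(prompt: str) -> str:
    """Greedy decoding: always take the highest-probability next word."""
    tokens = prompt.split()
    while True:
        dist = NEXT_WORD.get(tuple(tokens[-2:]))
        if dist is None:
            break
        nxt = max(dist, key=dist.get)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

# Fluent, confident, and false -- at no point does a fact get a chance to veto the sample.
print(greedy_continuation("The capital of Australia is"))
# -> "The capital of Australia is Sydney"
```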

Ironically, when it is resolved, we get a whole lot of new problems, but more in the philosophical space.

1

strongaifuturist OP t1_j9u28ig wrote

That's absolutely right. The current LLMs don't have an independent world model per se. They have a world model of sorts, but it's more like a sales guy trying to memorize the words in a sales brochure. You might be able to get through a sales call, but it's a much more fragile strategy than first having a model of how things work and then figuring out what to say based on that model and your goals. But there is a lot of work in this area. The LLMs of today are like planes in the time of Kitty Hawk: sure, they have limitations, but the concept has been proven. Now it's only a matter of time before the kinks get ironed out.
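As a rough illustration of the brochure point (a made-up toy, not how any real system works): the first responder below has only memorized surface strings, while the second carries a tiny explicit world model and derives answers from it, so it can cope with a question that was never in the script.

```python
# "Brochure" strategy: memorized surface strings, fluent but brittle.
MEMORIZED_LINES = {
    "how long does the battery last?": "Up to 10 hours on a single charge.",
}

def brochure_bot(question: str) -> str:
    # Pattern lookup: works when the question matches, falls back to filler otherwise.
    return MEMORIZED_LINES.get(question.lower(), "Great question! It's an amazing product.")

# "World model" strategy: a few facts plus a rule for deriving answers from them.
WORLD_MODEL = {"battery_wh": 50.0, "draw_watts": 5.0}

def model_based_bot(question: str) -> str:
    q = question.lower()
    if "battery" in q and "video" in q:
        # A heavier load was never rehearsed, but it is derivable from the model.
        hours = WORLD_MODEL["battery_wh"] / (WORLD_MODEL["draw_watts"] * 2)
        return f"Roughly {hours:.0f} hours of video playback."
    if "battery" in q:
        hours = WORLD_MODEL["battery_wh"] / WORLD_MODEL["draw_watts"]
        return f"About {hours:.0f} hours under a typical load."
    return "I don't have enough of a model to answer that."

print(brochure_bot("How long does the battery last while playing video?"))     # canned filler
print(model_based_bot("How long does the battery last while playing video?"))  # derived: ~5 hours
```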

2

Liberty2012 t1_j9u3ov6 wrote

> Now it's only a matter of time before the kinks get ironed out.

Yes, that is the point of view of some, though not of all. If this is a core architectural problem of LLMs, it won't be solvable without a new architecture. So yes, it can be solved, but it won't be an LLM that solves it.

But yes, I'm more concerned about the implications of what comes next when we do solve it.

1

strongaifuturist OP t1_j9u8es5 wrote

I’m not saying that architectural changes aren’t needed. The article outlines some of the alternatives being explored. My favorite is one from Yann LeCun, based on a technique called H-JEPA (Hierarchical Joint Embedding Predictive Architecture).
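For anyone curious, the core JEPA idea from LeCun's "A Path Towards Autonomous Machine Intelligence" paper is to predict in representation space rather than in raw input space. The sketch below is only a minimal illustration under assumed dimensions and a plain MSE objective; the actual proposal also needs an anti-collapse regularizer, and H-JEPA stacks these modules hierarchically so higher levels predict over longer time scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim_in, dim_latent = 128, 32  # illustrative sizes, not from the paper

context_encoder = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_latent))
target_encoder  = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_latent))
predictor       = nn.Sequential(nn.Linear(dim_latent, 64), nn.ReLU(), nn.Linear(64, dim_latent))

def jepa_loss(context: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Predict the target's representation from the context's representation."""
    s_context = context_encoder(context)
    with torch.no_grad():  # target branch detached; in practice it's typically an EMA copy
        s_target = target_encoder(target)
    s_predicted = predictor(s_context)
    # Error is measured in latent space, so unpredictable detail in the raw input
    # never has to be reconstructed -- that's the "world model" part of the pitch.
    return F.mse_loss(s_predicted, s_target)

# One illustrative step on random stand-in data (think "state now" vs. "state a moment later").
x_now, x_later = torch.randn(8, dim_in), torch.randn(8, dim_in)
loss = jepa_loss(x_now, x_later)
loss.backward()
print(float(loss))
```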

1