
phillythompson t1_jb5jysg wrote

And what limitations do you see with LLMs that wouldn’t be “solved” as time goes on?

2

Silly_Awareness8207 t1_jb5knx0 wrote

I'm no expert, but the hallucination problem seems pretty difficult.

14

[deleted] t1_jb8niwt wrote

I don't see how any other architecture would solve that problem; it's just an issue of how current LLMs are trained.

1