
RabidHexley t1_jad8r8t wrote

> for example if someone asked you a trick question, and the predictable false answer pops into your head immediately - that's what a single call to an LLM is

Yep. This is the biggest issue with current consumer LLM implementations. We basically force the AI to word-vomit the first thing it thinks of. It's very good at getting things right in spite of that, but when it gets it wrong the system has no recourse. Reaching a correct conclusion, producing a well-reasoned response, or even just recognizing that we don't know something requires multiple passes.
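A second pass is easy to sketch: ask the model for a draft, then ask it again to check that draft before answering. This is just an illustration of the idea, not anyone's actual implementation — `call_llm` here is a hypothetical stand-in, stubbed out so the example runs on its own and fakes the trick-question behavior described above (a real version would call an actual model API).

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call.
    # It fakes the behavior described above: the first pass blurts out
    # the intuitive wrong answer, the review pass catches the mistake.
    if "Check the draft" in prompt:
        return "REVISED: $0.05"
    return "$0.10"  # single-pass "word vomit"

def answer_with_review(question: str) -> str:
    # Pass 1: the immediate, unreviewed draft.
    draft = call_llm(question)
    # Pass 2: ask the model to critique its own draft.
    review = call_llm(
        f"Check the draft answer below.\n"
        f"Question: {question}\nDraft: {draft}\n"
        "If it is wrong, reply 'REVISED: <answer>'; otherwise repeat it."
    )
    if review.startswith("REVISED:"):
        return review.removeprefix("REVISED:").strip()
    return draft

print(answer_with_review(
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
))  # prints "$0.05" — the review pass overrides the reflexive "$0.10"
```

With a single call you'd just get the draft; the second call is what gives the system any recourse when the first answer is the predictable false one.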