Submitted by IluvBsissa t3_11e7csf in singularity
RabidHexley t1_jad8r8t wrote
Reply to comment by challengethegods in Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
> for example if someone asked you a trick question, and the predictable false answer pops into your head immediately - that's what a single call to an LLM is
Yep. This is the biggest issue with current consumer LLM implementations. We basically force the AI to word-vomit the first thing it "thinks" of. It's remarkably good at getting things right in spite of that, but when it gets something wrong the system has no recourse. Reaching a correct, well-reasoned conclusion, or even just recognizing that we don't know something, requires multiple passes.
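The "multiple passes" idea can be sketched as a simple critique-and-revise loop: instead of returning the model's first draft, feed the draft back in and ask the model to check it. Everything here is illustrative; `call_llm` is a hypothetical stub standing in for a real model API, and the bat-and-ball trick question is just an example of a "predictable false answer" on the first pass.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call.
    # It returns the classic knee-jerk answer on a first pass,
    # and a corrected answer when asked to critique a draft.
    if "Critique" in prompt:
        return "On reflection, the ball costs $0.05, not $0.10."
    return "The ball costs $0.10."  # the predictable false answer

def answer_with_reflection(question: str, passes: int = 2) -> str:
    """Draft an answer, then refine it over one or more critique passes."""
    draft = call_llm(question)  # a single pass = the 'word-vomit' answer
    for _ in range(passes - 1):
        critique_prompt = (
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            "Critique the draft and give a corrected answer."
        )
        draft = call_llm(critique_prompt)
    return draft
```

With `passes=1` you get exactly the single-call behavior described above: the first thing the model "thinks of" goes straight out the door. Adding even one critique pass gives the system a chance to catch its own mistake.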