
RabidHexley t1_j9u5q7t wrote

Hallucinating seems like a byproduct of the need to always provide output straight away, rather than ruminating on a response before giving the user an answer. Almost like being forced to always word-vomit. "I don't know" sounds obvious, but it's usually the result of multiple rounds of recursive thought beyond the first thing that comes to mind.

Sort of like how we can experience visual and auditory hallucinations simply by messing with our sensory input or removing it altogether (optical illusions, sensory deprivation tanks). Our brain is constantly making assumptions based on input to maintain functional continuity, and so it has no qualms about fudging things a bit in the name of keeping things moving. External input gets processed in real time, so that's where it's easiest to notice our brain fucking around with the facts.

LLMs simply do this in text form because text is the medium they operate in. It's definitely a big problem. It seems like there needs to be a way for an LLM platform to ask itself "Does this answer seem reasonable based on known facts? Is this answer based on conjecture or hypotheticals? etc." before outputting the first thing it thinks of, since the model does seem at least somewhat capable of identifying issues with its own answers when asked (something like the rough sketch below). Though any attempt to implement this sort of behavior would be difficult with the current publicly available models.
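To make that concrete, here's a minimal sketch of a generate-then-verify loop. `generate()` is a hypothetical stand-in for whatever completion API a platform exposes, and the prompts, `VERIFIED` marker, and attempt limit are illustrative assumptions, not anything a real model provider actually ships.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to an LLM completion endpoint."""
    raise NotImplementedError


def answer_with_self_check(question: str, max_attempts: int = 2) -> str:
    # First pass: the "word-vomit" draft the model would normally return.
    draft = generate(question)

    for _ in range(max_attempts):
        # Ask the model to critique its own draft before anything reaches
        # the user -- the "recursive thought" step described above.
        critique = generate(
            "Question: " + question + "\n"
            "Draft answer: " + draft + "\n"
            "Does this answer seem reasonable based on known facts? "
            "Is it based on conjecture or hypotheticals? "
            "Reply VERIFIED if it holds up, otherwise describe the problem."
        )
        if critique.strip().startswith("VERIFIED"):
            return draft

        # Revise using the critique, allowing an explicit "I don't know"
        # instead of confidently hallucinating.
        draft = generate(
            "Question: " + question + "\n"
            "Previous draft: " + draft + "\n"
            "Problems found: " + critique + "\n"
            "Write a corrected answer, or say 'I don't know' if the facts "
            "can't be verified."
        )

    return draft
```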
