Submitted by fangfried t3_11alcys in singularity
sideways t1_j9sw5zv wrote
The inability to experience doubt.
fangfried OP t1_j9swp3o wrote
Maybe insecurity is a sign of self-awareness and intelligence.
sideways t1_j9swy5f wrote
I would call it a sign of meta-cognition, which is something I don't think LLMs have at the moment.
GuyWithLag t1_j9t3zgd wrote
I get the feeling that LLMs currently are a few-term Taylor series expansion of a much more powerful abstraction; you get glimpses of it, but it's fundamentally limited.
RabidHexley t1_j9u5q7t wrote
Hallucinating seems like a byproduct of the need to always provide output straight away, rather than ruminating on a response before giving the user an answer. Almost like being forced to always word-vomit. "I don't know" seems obvious, but it's usually the result of multiple recursive thoughts beyond the first thing that comes to mind.
Sort of like how we can experience visual and auditory hallucinations simply by messing with our sensory input or removing it altogether (optical illusions, or a sensory deprivation tank). Our brain is constantly making assumptions based on input to maintain functional continuity, and thus has no qualms about simply fudging things a bit in the name of keeping things moving. External input processing has to happen in real time, so it's the easiest place to notice when our brain is fucking around with the facts.
LLMs just do this in text form, because text tokens are the base unit they operate on. It's definitely a big problem. It seems like there needs to be a way for an LLM platform to ask itself "Does this answer seem reasonable based on known facts? Is this answer based on conjecture or hypotheticals?" and so on, prior to outputting the first thing it thinks of, since it does seem at least somewhat capable of identifying issues with its own answers when asked. Though any attempt to implement this sort of behavior would be difficult with current publicly available models.
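A minimal sketch of what that kind of pre-output self-check might look like. This is an assumption-laden illustration, not a recipe from the thread: `ask_llm()` is a hypothetical helper standing in for whatever chat-completion call you have available, and the critique/revision prompts are made up for the example.

```python
# Sketch of a "check before you speak" loop. ask_llm() is a hypothetical
# stand-in for whatever chat model/API you have: it takes a list of
# {"role": ..., "content": ...} messages and returns the reply text.

def ask_llm(messages):
    raise NotImplementedError("wire this up to your model/API of choice")

def answer_with_self_check(question, max_revisions=2):
    """Draft an answer, ask the model to critique it, and revise if needed."""
    draft = ask_llm([{"role": "user", "content": question}])

    for _ in range(max_revisions):
        # Ask the model the kind of question suggested above, about its own draft.
        critique = ask_llm([
            {"role": "user", "content": question},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": (
                "Does this answer seem reasonable based on known facts? "
                "Is any part of it conjecture or a guess? "
                "Reply with OK if it holds up, otherwise list the problems."
            )},
        ])

        if critique.strip().upper().startswith("OK"):
            return draft  # the model signed off on its own draft

        # Otherwise, ask for a revision that fixes the listed problems,
        # explicitly allowing "I don't know" instead of a guess.
        draft = ask_llm([
            {"role": "user", "content": question},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": (
                "These problems were found with your answer:\n" + critique +
                "\nRewrite the answer to fix them. If you are not confident, "
                "say \"I don't know\" instead of guessing."
            )},
        ])

    return draft
```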
throwaway_890i t1_j9txt4j wrote
When it doesn't know the answer, it makes shit up that sounds very convincing.
I have found that when it's talking shit and you ask "What is wrong with your answer?", it will point out a problem with its own answer. And when it does know the right answer, it can tell you what was wrong with the previous one. I wonder whether this could be used to reduce the amount of shit it talks.
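A rough sketch of how that follow-up question could be automated, using the same hypothetical `ask_llm()` helper as above. The "NOTHING" sentinel and the accept/flag logic are illustrative assumptions, not something tested in the thread.

```python
# Sketch: ask the model to poke holes in its own previous answer, then decide
# whether to trust it. ask_llm() is the same hypothetical helper as above
# (messages in, reply text out).

def answer_and_cross_examine(question):
    answer = ask_llm([{"role": "user", "content": question}])

    # The follow-up question from the comment above, plus a sentinel word so
    # the result is easy to check programmatically.
    critique = ask_llm([
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": (
            "What is wrong with your answer? "
            "If nothing is wrong, reply with the single word NOTHING."
        )},
    ])

    confident = critique.strip().upper().startswith("NOTHING")
    return answer, critique, confident

# Usage: treat self-flagged answers with suspicion instead of passing them
# straight to the user.
# answer, critique, confident = answer_and_cross_examine("Who invented the stapler?")
# if not confident:
#     print("Model flagged its own answer:", critique)
```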