Submitted by Cool_Abbreviations_9 t3_123b66w in MachineLearning
Peleton011 t1_jdvtqq0 wrote
Reply to comment by SkinnyJoshPeck in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Unless I'm wrong somewhere, LLMs work with probabilities: they output the most likely response given their training data.
In principle they could show you how likely a given response is, and since the real papers would be part of the training set, answers the model is less sure of are statistically more likely to be false.
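The idea here is the chain rule of probability: an LM scores a whole output as the product of per-token conditional probabilities, so a low overall likelihood can flag an unfamiliar (possibly made-up) citation. A minimal sketch with made-up per-token numbers, not from any real model:

```python
import math

# Hypothetical conditional probabilities P(token_i | tokens_<i) that a
# language model might assign to each token of a generated citation.
# These numbers are invented for illustration only.
token_probs = [0.9, 0.8, 0.95, 0.6]

# Probability of the whole sequence: product of the conditionals.
sequence_prob = math.prod(token_probs)

# In practice log-probabilities are summed instead, to avoid underflow
# on long sequences.
log_likelihood = sum(math.log(p) for p in token_probs)

print(sequence_prob)   # 0.9 * 0.8 * 0.95 * 0.6 = 0.4104
```

A real paper title seen many times in training would tend to get higher per-token probabilities (and so a higher sequence likelihood) than a fabricated one.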
RageOnGoneDo t1_jdxm91o wrote
Why are you assuming it's actually doing that calculation, though?
Peleton011 t1_jdxolt1 wrote
I mean, I said LLMs definitely could do that. I never intended to convey that that's what's going on in OP's case, or that ChatGPT specifically is able to do so.
RageOnGoneDo t1_jdxoqxf wrote
How, though? How can an LLM do that kind of statistical analysis?