Rioghasarig
Rioghasarig t1_jdxs956 wrote
Reply to comment by Cool_Abbreviations_9 in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
I really don't think your experiment makes much sense. Even if we could determine GPT's confidence level, there's no reason to believe that asking it for its confidence is an effective way of eliciting the actual confidence. As other people have asked, the obvious follow-up is: "what's your confidence in these confidence reports?" The logic is baseless.
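For what it's worth, a more direct way to get at the model's actual confidence is to read the per-token log-probabilities the API already exposes, instead of asking the model to rate itself. A minimal sketch with the pre-1.0 `openai` Python client (the `logprobs` parameter is real; the prompt and model name are just illustrative):

```python
import math
import openai  # assumes OPENAI_API_KEY is set in the environment

# Ask for per-token log-probabilities instead of a self-reported score.
resp = openai.Completion.create(
    model="text-davinci-003",  # a completions model that supports logprobs
    prompt="Q: Who wrote 'The Selfish Gene'?\nA:",
    max_tokens=16,
    temperature=0,
    logprobs=1,  # return the log-prob of each sampled token
)

choice = resp["choices"][0]

# Convert log-probs to probabilities; low values flag tokens the model
# was genuinely unsure about, independent of what it *says* about itself.
for token, lp in zip(choice["logprobs"]["tokens"],
                     choice["logprobs"]["token_logprobs"]):
    print(f"{token!r}: p = {math.exp(lp):.3f}")
```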
Rioghasarig t1_jdxrp3y wrote
Reply to comment by astrange in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
No, they were right about the base model of GPT, which was trained simply to predict the next word. ChatGPT and GPT-4 have evolved beyond that (with things like RLHF).
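To make "trained simply to predict the next word" concrete: the base-model objective is just cross-entropy between the logits at position t and the token at position t+1. A toy sketch in PyTorch (shapes and random tensors are made up for illustration):

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 50257, 8, 2
logits = torch.randn(batch, seq_len, vocab_size)   # stand-in for model output
tokens = torch.randint(vocab_size, (batch, seq_len))

# Shift by one so position t predicts token t+1 -- that's the whole objective.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss)  # RLHF models like ChatGPT add further training on top of this
```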
Rioghasarig t1_j1ew6cy wrote
Are you sure this technology is linked to ChatGPT? It doesn't seem to say that anywhere on that webpage.
Rioghasarig t1_jdz24za wrote
Reply to comment by astrange in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
People were using the base model when it first came out, and some people are still using it today. The game AI Dungeon still runs on what is essentially a transformer trained on next-token prediction. For that kind of model, it would be accurate to say it "just (attempts to) output the next most probable word".
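A minimal sketch of what that looks like at inference time, using GPT-2 from Hugging Face `transformers` as a stand-in for "a transformer trained on next-token prediction" (the prompt is just illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The dungeon door creaks open and", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits          # (1, seq_len, vocab_size)

# "Outputs the next most probable word": take the argmax at the last position.
next_id = logits[0, -1].argmax()
print(tok.decode(next_id))
```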