killerfridge t1_jdv0zcm wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
Does it hallucinate less, or does it give a mixture of "correct/incorrect" answers so that it has something to review? After the review step, does it give more correct answers than simply assigning it an "assistant" role would? It's an interesting route, and from my brief testing GPT-4 does trip up on the given questions without the review step. The loop I'm picturing is a two-step one: get an answer, then feed it back with a review instruction.
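Something like this minimal sketch, using the OpenAI Python client — the exact wording of the "France" question and of the review prompt are my guesses, not OP's:

```python
# Sketch of a two-step "self-review" prompting loop (assumed wording).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

# Guessed version of the "France" question from the thread.
question = ("Name an animal whose name starts with the same letter "
            "as the capital of France.")

# Step 1: get the initial answer.
history = [{"role": "user", "content": question}]
answer = ask(history)

# Step 2: feed the answer back and ask the model to review itself.
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Review your answer above. Check each "
                                "claim step by step and correct any "
                                "mistakes."},
]
reviewed = ask(history)

print("Initial:", answer)
print("Reviewed:", reviewed)
```

The open question is whether step 2 actually fixes the answer more often than a stronger system prompt alone would.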
killerfridge t1_jdvid7z wrote
Reply to comment by tamilupk in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
Yeah, I tried the "France" prompt in both GPT-4 and Bard, and both failed in the same way (ferret). Bard then failed to adjust on review, but in a different way: it conceded it was wrong about the letter, yet claimed there were no animals beginning with the letter 'P', which I did not expect!