tamilupk OP t1_jdvecc3 wrote
Reply to comment by killerfridge in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
That's an interesting thought. For the example prompts, at least, I tested without the review prompt and it gave the same answer unless I added "think step by step" at the end of the question. I will test this more.
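For anyone who wants to script this kind of test, here is a minimal sketch of the review-prompt pattern being discussed. `ask_llm` and `answer_with_self_review` are hypothetical helpers, not part of any particular API; swap in whichever chat client you are using.

```python
# Minimal sketch of the self-review prompting pattern, under the assumption
# that `ask_llm` stands in for your chat-completion call (OpenAI, Bard, etc.).

def ask_llm(messages: list[dict]) -> str:
    # Hypothetical placeholder: replace with a real chat API call that takes
    # a list of {"role": ..., "content": ...} messages and returns text.
    raise NotImplementedError("plug in your chat client here")

def answer_with_self_review(question: str) -> str:
    # First pass: optionally nudge the model with "think step by step".
    messages = [{"role": "user", "content": question + "\nThink step by step."}]
    first_answer = ask_llm(messages)

    # Second pass: feed the answer back and ask the model to review it.
    messages += [
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": (
            "Review your answer above. If anything is incorrect, correct it; "
            "otherwise restate the final answer."
        )},
    ]
    return ask_llm(messages)
```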
killerfridge t1_jdvid7z wrote
Yeah, I tried the "France" prompt in both ChatGPT4 and Bard, and both failed in the same way (ferret). Bard failed to adjust on review, but in a different way: it claimed that whilst it was wrong about the letter, there were no animals that began with the letter 'P', which I did not expect!