PriestOfFern t1_jc6x37m wrote
Reply to comment by v_krishna in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Take it from someone who spent a long time working on a davinci support bot: it's not that easy. No matter how much time you spend working on the prompt, GPT will find some way to randomly hallucinate something.
Sure, prompt engineering might get rid of the majority of hallucinations, but it won't get them down to a reasonable level. Fine-tuning might fix this (citation needed), but I haven't played around with it enough to tell you with confidence.
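[Editor's note: a minimal sketch of the kind of mitigation the commenter is describing — pinning temperature to 0 and grounding the completion in retrieved context so the model has less room to invent facts. The `Completion.create` call matches the pre-1.0 `openai` Python client from the text-davinci-003 era; `retrieve_support_docs` is a hypothetical helper, not anyone's real retrieval code.]

```python
import openai  # pre-1.0 client; assumes openai.api_key is set in the environment

def retrieve_support_docs(question: str) -> str:
    """Hypothetical retrieval step: look up relevant support articles
    to ground the model. Stubbed out for illustration."""
    return "Article 42: To reset your password, go to Settings > Security."

def answer(question: str) -> str:
    context = retrieve_support_docs(question)
    prompt = (
        "Answer the customer's question using ONLY the context below. "
        "If the answer is not in the context, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,   # remove sampling randomness
        max_tokens=200,
    )
    return resp["choices"][0]["text"].strip()
```

Even with grounding and temperature 0, the commenter's point stands: the model can still assert things the context never said.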
v_krishna t1_jc7wzmx wrote
I don't doubt it. I've only been using it for workflow aids (Copilot-style stuff, and generating unit tests to cover error-handling conditions, etc.), and now we're piloting our first generative-text products, but very much human-in-the-loop: customer data feeds into a prompt, but the output then goes into an editor for a human to proof and update before we do anything with it. The number of totally fake webinars hosted by totally fake people it has hallucinated is wild (the content and agendas sound great and are sensible, but none of it exists!)
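[Editor's note: not the commenter's actual pipeline — just a minimal sketch of the human-in-the-loop gating described above, where the model only ever produces drafts and nothing ships until a human has edited and approved it. All names here (`Draft`, `publish`, the stub model) are illustrative.]

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    customer_id: str
    text: str
    approved: bool = False  # flipped only by a human reviewer, never by the pipeline

def generate_draft(customer_id: str, customer_data: dict,
                   call_model: Callable[[str], str]) -> Draft:
    """Model output lands in a review queue as a Draft, not a published artifact."""
    prompt = f"Write a product update email for: {customer_data}"
    return Draft(customer_id=customer_id, text=call_model(prompt))

def publish(draft: Draft) -> None:
    """Refuse to ship anything a human hasn't signed off on."""
    if not draft.approved:
        raise RuntimeError("Draft not approved by a human reviewer.")
    print(f"Publishing to {draft.customer_id}: {draft.text[:60]}...")

# Usage with a stub model standing in for the real API call:
draft = generate_draft("cust-123", {"name": "Acme"}, lambda p: "Hi Acme, ...")
draft.text = "Hi Acme, here's this month's update."  # human edits in the editor
draft.approved = True                                # human signs off
publish(draft)
```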
mattrobs t1_jcs3vvo wrote
Have you tried GPT-4? It's been quite resilient in my testing.