v_krishna t1_jc4orxw wrote
Reply to comment by rePAN6517 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
The consistent-with-the-world type stuff could be built into the prompt engineering (e.g., tell the model about a map the character has), and I think you could largely minimize hallucination but still have very realistic conversations
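For example, a minimal sketch of what baking world state into the prompt might look like (the world dict, names, and prompt wording here are all made up for illustration; prepend the result to whatever completion call you're using):

```python
# Minimal sketch: render a fixed world model into the system prompt so the
# character can only reference things that actually exist in the game world.
# Everything here (WORLD, build_system_prompt) is illustrative, not a real API.

WORLD = {
    "location": "the village of Eastmarch",
    "places": ["Eastmarch", "the Old Mill", "Ravenwood Forest"],
    "inventory": ["a worn map of the region", "a lantern"],
}

def build_system_prompt(world: dict) -> str:
    """Turn world facts into hard constraints for the model."""
    return (
        f"You are a villager in {world['location']}. "
        f"You carry: {', '.join(world['inventory'])}. "
        f"The only places that exist are: {', '.join(world['places'])}. "
        "If the player asks about somewhere you haven't seen, tell them "
        "about your map rather than inventing new places. Never mention "
        "people, places, or items outside these lists."
    )

if __name__ == "__main__":
    # Prepend this to every completion request with your LLM client of choice.
    print(build_system_prompt(WORLD))
```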
v_krishna t1_j3rfc1k wrote
Reply to comment by MrEloi in [D] Found very similar paper to my submitted paper on Arxiv by [deleted]
I can definitely imagine Newton posting this: "that fucker Leibniz gets his name on the notation, it's not fair!"
v_krishna t1_jc7wzmx wrote
Reply to comment by PriestOfFern in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
I don't doubt it. I've only been using it for workflow aids (Copilot-style stuff, and generating unit tests to capture error-handling conditions, etc.), and now we are piloting our first generative text products, but very human-in-the-loop: customer data feeds into a prompt, but the output then goes to an editor for a human being to proof and update before doing anything with it. The amount of totally fake webinars hosted by totally fake people it has hallucinated is wild (the content and agendas and such sound great and are sensible, but none of it exists!)
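Roughly the shape of that pipeline, as a sketch (every name here is made up for illustration; the `llm` argument stands in for whatever completion client you actually call):

```python
# Rough sketch of the human-in-the-loop flow described above: customer data
# feeds a prompt, the model drafts text, and nothing ships until a human
# editor signs off. Draft, generate_draft, etc. are hypothetical names.

from dataclasses import dataclass

@dataclass
class Draft:
    customer_id: str
    text: str
    approved: bool = False

def build_prompt(customer: dict) -> str:
    # Constrain the model to the customer data we actually have on hand.
    return (
        f"Write a short product update email for {customer['name']}, "
        f"who uses these features: {', '.join(customer['features'])}. "
        "Only reference the features listed above; do not invent events, "
        "webinars, or people."
    )

def generate_draft(customer: dict, llm) -> Draft:
    # llm: any callable taking a prompt string and returning generated text.
    return Draft(customer_id=customer["id"], text=llm(build_prompt(customer)))

def review(draft: Draft, edited_text: str) -> Draft:
    # The human editor proofs and updates the draft before it goes anywhere.
    draft.text = edited_text
    draft.approved = True
    return draft

def publish(draft: Draft) -> None:
    # Hard gate: refuse to send anything a human hasn't approved.
    if not draft.approved:
        raise ValueError("Refusing to ship unreviewed model output.")
    print(f"Sending to {draft.customer_id}:\n{draft.text}")
```

The key design choice is that `publish` hard-fails on unreviewed drafts, so the hallucinated-webinar problem gets caught at the editor step instead of in front of a customer.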