ushtari_tk421 t1_jc6lh0m wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Am I off base in thinking it’s silly that a program that just generates text (some of which might be offensive) has to carry a disclaimer that it isn’t “harmless”? Seems like the worst-case scenario is that it says something we’d be offended by, or hold against a person, if a person said it?