currentscurrents t1_jc3sfua wrote
Reply to comment by yaosio in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Humans aren't going to have perfect laws everywhere, but it's still not the AI's place to decide what's right and wrong.
In practice, AI that doesn't follow local laws simply isn't going to be allowed to operate anyway.
yaosio t1_jc3tjpe wrote
In some countries pro-LGBT writing is illegal. If a censored model were released that can't write anything pro-LGBT because it's illegal somewhere, don't you think it would cause quite an uproar, quite a ruckus?
In Russia it's illegal to call their invasion of Ukraine a war. Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?
currentscurrents t1_jc3w4ez wrote
>Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?
Unless there's been a major movement in the war since I last checked the news, Ukraine is not part of Russia.
What you're describing sounds like a single universal AI that looks up every country's local laws and follows all of them blindly.
I think what's actually going to happen is that each country will train its own AI aligned with its local laws and values. A US or European AI would have no problem criticizing the Russian government or writing pro-LGBT text. But it would be banned in Russia and Saudi Arabia, and they would have their own alternatives.