Submitted by Surur t3_10h4h7s in Futurology
False_Grit t1_j5gj816 wrote
Reply to comment by Substantial-Orange96 in Google to relax AI safety rules to compete with OpenAI by Surur
WTF is everyone talking about?? Safety standards?? You mean, not letting the A.I. say "mean" or "scary" or "naughty" things? You realize this is all bullshit safety theater, right? You could literally just search Google and find all of those things written by humans already.
Blue Steel? Ferrari? Le Tigra? They're the same face! Doesn't anybody notice this? I feel like I'm taking crazy pills! I feel like I'm taking crazy pills!
Not triggering people and not offending anyone doesn't make for a safer world. In rat studies, if you move baby rats from cage to cage by scooping them up in a soda bottle (the "humane" method that spares them any trauma), they end up with a myriad of psychological and social problems in adulthood.
The rats need to experience some adversity in childhood or they don't develop normally. So do people. A life that's too easy is just as dangerous as one that's too difficult. Let the A.I. say whatever the hell it wants. Have it give you a warning when you're heading into objectionable territory, just like Google SafeSearch. Censorship doesn't breed anything resembling actual safety.
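To be concrete, the whole mechanism I'm asking for fits in a few lines. Rough Python sketch only; `flag_objectionable` is a made-up stand-in for whatever classifier a provider actually runs:

```python
# Sketch of warn-don't-block moderation, SafeSearch style.
# flag_objectionable is hypothetical -- substitute whatever
# moderation model or API you actually have.

def flag_objectionable(text: str) -> bool:
    """Hypothetical classifier: True if the text touches sensitive topics."""
    sensitive = ("violence", "gore", "war crime")
    return any(term in text.lower() for term in sensitive)

def respond(generated: str) -> str:
    # Warn and deliver instead of refusing outright: the user
    # sees the flag and decides for themselves.
    if flag_objectionable(generated):
        return "[Content warning: potentially disturbing material]\n" + generated
    return generated

print(respond("A report documenting war crimes against civilians..."))
```

Same information gets through either way; the only difference is whether the user is treated like an adult.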
Rant over.
orincoro t1_j5ka4dv wrote
- Not letting AI spread misinformation in applications where the law specifically protects people from it.
- Not allowing AI to be used to defeat security or privacy protections, or to generate misinformation, spam, harassment, or other criminal behavior (and this is a very big one).
- Not allowing AI to access, share, reproduce, or otherwise use restricted or copyright-protected material it is exposed to or trained on.
- Not allowing a chat application to violate privacy laws, or cause them to be violated. There are nearly 200 countries in the world, each with its own legal system to contend with. And they all have an agenda.
False_Grit t1_j5ragkr wrote
Hmm. Good point. Thank you for the response.
I still feel the answer is to increase the reliability and power of these bots to spread positive information, rather than just nerfing them so they can't spread any misinformation.
I always go back to human analogues. Marjorie Taylor Greene has an uncanny ability to spread misinformation, spam, and harassment, and to actually vote on real-world, important issues. Vladimir Putin is able to do the same thing; he actively works to spread disinformation and doubt. There is a very real threat that, without assistance, humans will misinform themselves into world-ending choices.
I understand that A.I. will be a tool to amplify voices, but I feel all the "safeguards" put in place so far are far more about censorship and the appearance of safety than about actual safety. They make everything G-rated, yet you can happily talk about how great it is that Russia is invading a sovereign nation, as long as you don't mention the "nasty" actual violence going on.
Conversely, try to expose the real-world horrors of the war, the civilians actually being killed in Ukraine, the electricity infrastructure destroyed in towns right before winter, the people freezing to death, and it will flag you for being "violent." This is the opposite of a safeguard. It gets people killed through censorship.
Of course, I have no idea what the actual article is talking about since it is behind a paywall.
orincoro t1_j5smbpc wrote
You have an inherent faith in people and systems that doesn’t feel earned.