False_Grit t1_j5gj816 wrote
Reply to comment by Substantial-Orange96 in Google to relax AI safety rules to compete with OpenAI by Surur
WTF is everyone talking about?? Safety standards?? You mean, not letting the A.I. say "mean" or "scary" or "naughty" things? You realize this is all bullshit safety theater, right? You could literally just search Google and find all of those things already written by humans.
Blue Steel? Ferrari? Le Tigre? They're the same face! Doesn't anybody notice this? I feel like I'm taking crazy pills! I feel like I'm taking crazy pills!
Not triggering people and not offending anyone doesn't make for a safer world. In studies with rats, if you pick up the baby rats with a soda bottle (the "humane" way that doesn't cause them any trauma) when moving them from cage to cage, they end up with a myriad of psychological and social problems in adulthood.
The rats need to experience some adversity in childhood or they don't develop normally. So do people. A life that's too easy is just as dangerous as one that's too difficult. Let the A.I. say whatever the hell it wants. Have it give you a warning if you're heading into objectionable territory, just like Google SafeSearch does. Censorship doesn't breed anything resembling actual safety.
Rant over.
False_Grit t1_j5ragkr wrote
Reply to comment by orincoro in Google to relax AI safety rules to compete with OpenAI by Surur
Hmm. Good point. Thank you for the response.
I still feel the answer is to increase the reliability and power of these bots so they spread positive information, rather than just nerfing them so they can't spread any misinformation.
I always go back to human analogues. Marjorie Taylor Greene has an uncanny ability to spread misinformation, spam, and harassment, and to actually vote on real-world, important issues. Vladimir Putin is able to do the same thing: he actively works to spread disinformation and doubt. There is a very real threat that, without assistance, humans will misinform themselves into world-ending choices.
I understand that A.I. will be a tool to amplify voices, but I feel that all the "safeguards" put in place so far are far more about censorship and the appearance of safety than actual safety. They seem to make everything G-rated, but you can happily talk about how great it is that Russia is invading a sovereign nation, as long as you don't mention the "nasty" actual violence that is going on.
Conversely, if you try to expose the real-world horrors of war in Ukraine — the people actually dying, the civilians being killed, the electricity infrastructure in towns being destroyed right before winter so that people freeze to death — it will flag you for being "violent." That is the opposite of a safeguard. It gets people killed through censorship.
Of course, I have no idea what the actual article is talking about since it is behind a paywall.