Submitted by Surur t3_10h4h7s in Futurology
False_Grit t1_j5ragkr wrote
Reply to comment by orincoro in Google to relax AI safety rules to compete with OpenAI by Surur
Hmm. Good point. Thank you for the response.
I still feel the answer is to increase the reliability and power of these bots to spread positive information, rather than just nerfing them so they can't spread any misinformation.
I always go back to human analogues. Marjorie Taylor Greene has an uncanny ability to spread misinformation, spam, and harassment, and to actually vote on real-world, important issues. Vladimir Putin is able to do the same thing; he actively works to spread disinformation and doubt. There is a very real threat that, without assistance, humans will misinform themselves into world-ending choices.
I understand that A.I. will be a tool to amplify voices, but I feel all the "safeguards" put in place so far are far more about censorship and the appearance of safety than about actual safety. They seem to make everything G-rated, yet you can happily talk about how great it is that Russia is invading a sovereign nation, as long as you don't mention the "nasty" actual violence that is going on.
Conversely, if you try to expose the real-world horrors of the war, the people actually dying in Ukraine, the civilians being killed, the electricity infrastructure in towns being destroyed right before winter, people freezing to death, it will flag you for being "violent." This is the opposite of a safeguard. It gets people killed through censorship.
Of course, I have no idea what the actual article is talking about since it is behind a paywall.
orincoro t1_j5smbpc wrote
You have an inherent faith in people and systems that doesn’t feel earned.