Effective-Avocado470 t1_j9u2mbj wrote
Reply to comment by PacmanIncarnate in Microsoft Bing AI ends chat when prompted about 'feelings' by Ssider69
That's not what I'm worried about; you're right that people are jumping on the wrong things right now.
The danger is that anyone could build a malicious propaganda machine with these tools and deploy it.
PacmanIncarnate t1_j9ug9wn wrote
But we already have malicious propaganda machines and they aren’t even that expensive to use. That’s ignoring the fact that propaganda doesn’t need to be sophisticated in any way to be believed by a bunch of people; we live in a world where anti-vaxxers and flat earthers regularly twist information to support their irrational beliefs. Marjorie Taylor Greene recently posted a tweet in which she used three made-up numbers to support her argument. There isn’t anything ChatGPT or Stable Diffusion or any other AI can do to our society that isn’t already being done on a large scale using regular existing technology.
Effective-Avocado470 t1_j9ujxpo wrote
It’s scale; that’s what makes AI so scary. You can use exactly the same propaganda techniques, but put out 1000x more auto-generated content: entire fake comment threads online.
Then they can make deepfaked content that says whatever they want. They could convince the world that the president has started a nuclear war, for example, by deepfaking an address. And that’s just one example.
Our entire view of reality and truth will change.
PacmanIncarnate t1_j9utde0 wrote
We’ve had publicly available deepfake tech for several years now and it has largely been ignored, other than the occasional news story about deepfake porn. The VFX industry was able to make a video of Forrest Gump talking to Nixon decades ago. Since then, few people have taken the time to use that tech for harm. It’s just unnecessary: if you want someone to believe something, you generally don’t have to convince them; you just have to say it and get someone else to back you up. Even better if it confirms someone’s beliefs.
I guess I just think our view of reality and truth is already pretty broken and it didn’t take falsified data.
Effective-Avocado470 t1_j9uu68n wrote
It's still new. The tech isn't quite perfect yet; you can still tell it's fake, so it's mostly jokes for now. The harm will come when you really can't tell the difference. It'll be here sooner than you think, and you may not even notice it happening until it's too late.
I agree that many people's grasp on reality is already slipping; I'm agreeing with you on what's happened so far. I'm saying it'll get even worse with these new tools.
Even rational and intelligent people will no longer be able to discern the truth