Thor4269 t1_j9lgq50 wrote

It will make misinformation significantly easier to make and spread

115

[deleted] t1_j9lnaw3 wrote

[removed]

63

Charlie_Mouse t1_j9omunc wrote

Sadly overall people won’t.

Or at least they may somewhat … apart from the misinformation that happens to agree with their personal politics, preconceptions and prejudices.

3

ButterflyAttack t1_j9m20kr wrote

They probably want to use it to automate ransomware transactions or other scams. That's where the money is. Hardcore Russian criminals don't give a fuck about politics or anyone else. They're capitalists, basically.

24

Intelligent-Prune-33 t1_j9mgaai wrote

Criminals don’t.

The Kremlin does, and yes, it employs entire divisions of hackers to get shit done.

21

Thor4269 t1_j9m386m wrote

With an AI, you can do it all at the same time

Make money, sow discontent, spread misinformation, and manage the ransomware operation all at once

8

CSI_Tech_Dept t1_j9nqr7h wrote

You don't think you can make a lot of money multiplying effectiveness of spreading disinformation for Kremlin?

A lot of regular folks think ChatGPT is finally an AI that can think. In reality, ChatGPT is a tool that generates text believable enough to look as if another human had typed it. It is not always correct (people who asked it questions about specific knowledge domains noticed that it often makes stuff up).

Those properties make it ideal for generating disinformation.

6

Beautiful_Fee1655 t1_j9nw6j6 wrote

Correct. I asked ChatGPT a question about water supplies for space travel, and it incorrectly answered that bringing just hydrogen aboard would be sufficient, because the travelers could make all their water from the hydrogen gas alone. Scary that anyone might rely on this pos chatbot for an accurate answer.

4

Charlie_Mouse t1_j9onjl8 wrote

Some of the more technical commentators now refer to Chat GPT as being merely “spicy autocomplete” which is a pretty on-the-nose description.

Which doesn’t stop it from being a useful tool for certain applications and a threat in other areas, but it’s nowhere near ‘smart’: it just grabs something plausible-sounding from the huge corpus of writing and comments it’s been fed. A lot of the time, for things like actual figures, it just plain guesses.

2

RafeDangerous t1_j9osia3 wrote

> Some of the more technical commentators now refer to Chat GPT as being merely “spicy autocomplete” which is a pretty on-the-nose description.

That vastly understates what it does. Chat GPT can be a very convincing conversationalist. It can't always convincingly pass as human, but the fact is that it can make someone feel like they're talking to an intelligent entity, and it has the potential to be hugely influential. Take a look in some of the subreddits for AI chats like Replika or Chai and you'll see plenty of people who treat AIs like actual friends and companions. The potential for someone to take control of these things and use them to subtly influence people is a very real concern going forward if they become widely used.

2

reverendjesus t1_j9nybqc wrote

The hardcore Russian criminals are the ones in politics.

1

cunt_isnt_sexist t1_j9majhn wrote

Nah, they had to turn Fox and Trump into their puppets for that. ChatGPT will be used by them to ask "how to use VPN" and "how to turn off geo location on social media".

1

Dont_Panick_ t1_jaaytmd wrote

I feel like the logical end to all of this will need to be AI-based information filters. You can't trust the generating side of information anymore, so we need to control what's ingested.

I'd say trying to properly control this may be a defining moment in human history. You could end up with a "Western filter" vs a "Russian filter", and we'd have logically hit the end state of human information silos.

Ensuring we use fair, open, and auditable filters is the only way to build general trust. But bad actors will try to control their own filter. I believe this is already happening at a smaller scale with China.

1