
AdditionalPizza t1_j1a2i1v wrote

No, I hear you. I'm just saying that this sub, in general, leans toward optimism and salvation over pessimism and doom.

Once AI attains a certain level of ability and intelligence, I think it's a wise concern, and well before AI is actually capable of causing havoc. It's just probably not feasible, because it's essentially an arms race and no corporation or government will slow progress willingly.

1

a4mula OP t1_j1a3ibw wrote

I understand. The US just imposed sanctions on China that could potentially have a major geoeconomic impact. I'm not ignoring the mountain this idea represents.

But if we, as users, are going to have a say in making that climb, it starts now, and we're out of time.

Because even today, right now, with nothing more than ChatGPT, a weaponized form of viral thought control is available to anyone who chooses to use it, in any way they see fit.

And while I'm encouraging fair thought, rationality, and open discussion, not everyone will.

Some will use these tools to persuade populations of users towards their own interests.

And I'd rather be climbing that mountain now than down the road when the only proper tools are the ones at the front of the line.

1

AdditionalPizza t1_j1a4zxo wrote

>Some will use these tools to persuade populations

How exactly do you see that happening? Like, by what mechanism? Propaganda?

1

a4mula OP t1_j1a6j3b wrote

It took me about five minutes to get ChatGPT to write a mediocre message of persuasion.

It's not great, but it's fair.

Imagine someone who spends thousands of hours shaping and honing a message with a machine that brings superhuman expertise to crafting the language for maximum persuasion. Shaving off the little snags of their particular ideology that would catch critical thought. Making it sound rational and logical, and very difficult to combat in general language.

They could, and the machine would willingly oblige at every step in that process.

You have a weaponized ideology at that point. It doesn't matter what it is.
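To make the mechanism concrete, here is a minimal sketch of the iterative "shape and hone" loop described above, assuming the OpenAI Python client; the model name, prompts, and number of passes are placeholder assumptions for illustration, not anything from this thread:

```python
# Rough sketch of an iterative message-refinement loop, assuming the
# OpenAI Python client (openai >= 1.0). Model name, prompts, and pass
# count are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "A one-paragraph pitch for some position."  # starting message

for _ in range(3):  # in principle this loop could run for thousands of passes
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an expert editor focused on clear, persuasive prose."},
            {"role": "user",
             "content": "Rewrite the following so it reads as more logical, "
                        "more broadly appealing, and harder to argue against:\n\n"
                        + draft},
        ],
    )
    draft = response.choices[0].message.content  # feed the result back in

print(draft)
```

Each pass simply feeds the model's output back in as the next input, which is all the "thousands of hours of honing" described above would really amount to.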

1

AdditionalPizza t1_j1aeql5 wrote

The internet and social media as a whole are already that powerful. In the near future, I think we may be better off at combating "fake news" than we have been over the past 10 years. The reason being, people will be much more reluctant to believe anything online because it will likely be presumed to be AI. Right now, not enough people are aware of it. In 2 years everyone and their brother will be more than aware of how pervasive AI is on the internet. Each side of the spectrum will assume the other side is trying to feed them propaganda.

That's my 2 cents anyway.

3

a4mula OP t1_j1aga7l wrote

I appreciate it. I certainly want to take in as many different perspectives as I can, as it helps me see things my own perspective misses. I do see a path in which the initial systems are embedded with the principles that have been discussed: logic, rational thinking, critical thinking.

And hopefully that initial training is enough to embed those habits in users, so that if down the road they are exposed to less obvious forms of manipulation, they're more capable of combating it.

I think OpenAI has done a really great job overall at ensuring ChatGPT mostly adheres to these principles, but that might just be the reflection I get from the machine, because that's how I try to interact with it.

I just don't know, and I think it's important that we understand these systems more. All of us.

1