AdditionalPizza t1_j19zzlu wrote

I think you might have the wrong sub. A lot of people here want tech/AI to advance as quickly as possible and are quite optimistic about it. There are some people here who fear humanity is doomed, but the majority probably see the singularity and AGI as a sort of salvation to end current suffering.

9

a4mula OP t1_j1a0opb wrote

So do I, and I am optimistic. Read my history here. I've been on board for years.

I'm beyond excited. I've been hooked into ChatGPT for two weeks now. Hundreds of hours with it.

I'm an ardent supporter of advancing technology.

But I also see risks with this technology that aren't being considered by many, and certainly aren't being discussed.

It's the way these machines influence us. Can you deny the power technology has to shape ideas and beliefs? To the point of propaganda and marketing. We should all be able to agree that's our reality today.

Those are systems we actively try to resist as users. We block them, we ignore them. Yet they're still effective. It's why they're worth so much.

These machines? We don't reject them. We welcome them with open arms and engage with them in ways more intimate than with any human you'll ever meet.

Because they understand us in ways no human ever can.

And that's a powerful tool for rapid change in thoughts and behaviors.

Not always in positive ways.

We need time to consider these issues.

2

AdditionalPizza t1_j1a2i1v wrote

No, I hear you. I'm just saying I think this sub in general leans toward optimism and salvation over pessimism and doom.

Once AI attains a certain level of ability and intelligence, I think it's a wise concern. I mean, well before AI has any real possibility of causing havoc. Slowing down just probably isn't feasible, because it's essentially an arms race and no corporation or government will slow progress willingly.

1

a4mula OP t1_j1a3ibw wrote

I understand. The US just imposed sanctions on China that could potentially have major geoeconomic impact. I'm not ignoring the mountain this idea represents.

But if we, as users, are going to have a say in making that climb, it starts now, because we're running out of time.

Because even today, right now, with nothing more than ChatGPT, a weaponized form of viral thought control is available to anyone who chooses to use it, any way they see fit.

And while I'm encouraging fair thought, rationality, and open discussion, not everyone will.

Some will use these tools to persuade populations of users towards their own interests.

And I'd rather be climbing that mountain now than down the road, when the only proper tools are the ones at the front of the line.

1

AdditionalPizza t1_j1a4zxo wrote

>Some will use these tools to persuade populations

How exactly do you see that happening? Like, by what mechanism? Propaganda?

1

a4mula OP t1_j1a6j3b wrote

It took me about five minutes to get ChatGPT to write a mediocre message of persuasion.

It's not great, but it's fair.

Imagine someone who spends thousands of hours shaping and honing a message with a machine that gives them superhuman expertise in how to shape the language to maximize persuasion. To shave off the little snags in their particular ideology that critical thought would catch. To make it rational, and logical, and very difficult to combat in general language.

They could, and the machine would willingly oblige at every step in that process.

You have a weaponized ideology at that point. It doesn't matter what it is.

1

AdditionalPizza t1_j1aeql5 wrote

The internet and social media as a whole are already that powerful. In the near future, I think we may be better at combating "fake news" than we have been in the past 10 years. The reason being, people will be much more reluctant to believe anything online because it will likely be presumed to be AI-generated. Right now, not enough people are aware of it. In two years, everyone and their brother will be more than aware of how pervasive AI on the internet is. Each side of the spectrum will assume the other side is trying to feed them propaganda.

That's my 2 cents anyway.

3

a4mula OP t1_j1aga7l wrote

I appreciate it. I certainly want to view as many different perspectives as I can, as it helps me see things my own perspective misses. I do see a path in which the initial systems are embedded with the principles that have been discussed: logic, rational thinking, critical thinking.

And hopefully that initial training is enough to embed that behavior in users, so that if they're exposed to less obvious forms of manipulation down the road, they're more capable of combating it.

I think OpenAI has done a really great job overall at ensuring ChatGPT mostly adheres to these principles. But that might just be the reflection of the machine that I get, because that's how I try to interact with it.

I just don't know, and I think it's important that we understand these systems more. All of us.

1