
Sirisian t1_j7ar1ic wrote

Your premise is flawed with regard to ChatGPT: it's OpenAI - a company - making the decisions on what to filter, not an AI. Corporations self-censoring products to be PR-friendly isn't new. It's not even an overly advanced filter; it detects whether a response is negative/gory/etc. and hides it (unless you trick it). A lot of people attach meaning to ChatGPT's responses when there isn't any. It can create and hold opposing viewpoints on nearly every topic. Your issue, like a lot of AI concerns, is with how companies will implement AIs and what biases might exist in training them.
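To make that concrete: the kind of post-hoc filter I'm describing is conceptually just a classifier bolted onto the model's output. A toy sketch in Python (the categories, keywords, and refusal message are invented for illustration; OpenAI's real moderation stack is a trained model, not keyword matching):

```python
# Toy sketch of a post-hoc output filter. Everything here is invented for
# illustration; a production system would use a trained moderation model.

BLOCKED_CATEGORIES = {"gore", "hate"}  # hypothetical policy categories

KEYWORDS = {
    "gore": ["dismember", "entrails"],
    "hate": ["some-slur"],
}

def classify_response(text: str) -> set[str]:
    """Stand-in classifier: flag any category whose keywords appear in the text."""
    lowered = text.lower()
    return {cat for cat, words in KEYWORDS.items()
            if any(word in lowered for word in words)}

def filter_response(text: str) -> str:
    """Show the model's answer only if no blocked category was flagged."""
    if classify_response(text) & BLOCKED_CATEGORIES:
        return "This response may violate the content policy."  # hide the answer
    return text

print(filter_response("Here's a soup recipe."))  # passes through unchanged
```

The point is that the filter sits outside the model: the company picks the categories and the threshold, not the AI.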

There's no real way to please everyone in these discussions. An unrestricted system will just output garbage from its training data. Some users claim they want to see that even if it hurts the company's brand. People aware of how these systems work understand that training on the Internet includes misinformation that can be amplified. Filtering garbage from the training data can take a while to get right.

107

hesiod2 t1_j7feg3e wrote

This can be solved by having a default setting which the user can override. For example, by default Google hides sexual material, but that setting can easily be changed by the user. Adults then make their own decisions about the setting and decide for their children what they want them to see.

According to Sam Altman: “we are working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds. this is harder than it sounds and will take us some time to get right.”

Source: https://twitter.com/sama/status/1620927984797638656?s=46&t=iyZErcajcVCp5w0iAm_08A
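In code, the "individual preferences within broad bounds" idea could be as simple as this sketch (the levels, bounds, and names are illustrative, not an actual Google or OpenAI API):

```python
# Illustrative sketch of "neutral defaults users can override within broad
# bounds". Levels, bounds, and names are invented for the example.

HARD_BOUNDS = (0, 2)   # 0 = strict ... 2 = relaxed; the provider allows nothing beyond
DEFAULT_LEVEL = 0      # strict by default, like SafeSearch

def effective_filter_level(user_preference: int | None) -> int:
    """Apply the default unless the user opted out; clamp overrides to the bounds."""
    if user_preference is None:
        return DEFAULT_LEVEL
    lo, hi = HARD_BOUNDS
    return max(lo, min(hi, user_preference))

print(effective_filter_level(None))  # 0: default applies
print(effective_filter_level(5))     # 2: override clamped to the broad bounds
```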

3

orincoro t1_j7fr90k wrote

Those settings are also driven by machine learning. You’re thinking in a linear way, but neural networks don’t work like that.

All of this is nonsensical. Altman has to define what is “neutral.” But this is an orthogonal value, not an objective characteristic. What’s neutral to you isn’t neutral to me. The bloody-minded technocracy of these companies is utterly fucking maddening. They’ll replace human-driven decision making and put the definition of morality and ethics themselves in the hands of programs. And believe me: the people who will benefit are the people who own and control those programs.

1

orincoro t1_j7fqwud wrote

Absolutely disagree. The purpose of neural networks is to establish connections in an organic way. You can use certain heuristics to get the machine to form connections in certain ways, but your ability to guide its learning is limited by the fact that you will never know in detail what all the nodes are for or how they actually work. There is no possibility of analyzing a neural network in the sense that we can understand machine code.

This is why neural networks can degrade if not trained properly. Companies like Google and Facebook don’t have as much control over their systems as they would like you to think.

2

RamaSchneider OP t1_j7ardb4 wrote

I'm not pleased or displeased ... life holds uncertainties with and without AI.

I'm curious. This is the future we'll be living in, and we'd better figure out how to drive the beast before the beast learns how to drive us. My assumption is that it would be child's play to base an AI's decision making on a commercial marketing manual of some sort.

Bad? I'm not judging. Something to be aware of and alert for? Absolutely.

−9

Sirisian t1_j7atnhi wrote

> My assumption is that it would be child's play to base an AI's decision making on a commercial marketing manual of some sort.

Again, those influences come not from an AI but from the corporation that produces it. Controlling what corporations do is what regulation is for.

> Bad? I'm not judging. Something to be aware of and alert for? Absolutely.

I wish others took that same view and simply studied and discussed the problems. Too often on r/ChatGPT people jump to wild conclusions.

21

RamaSchneider OP t1_j7auj2f wrote

The heart of my argument in this specific thread is: I agree that right now we're providing the basis for AI learning. That will most probably not be true in the future, simply because the ability of computers to collect, collate, and distribute information dwarfs that of humans.

Yes, today you are correct. My point is that I don't believe that lasts. (And yes - I do think the evidence supports me)

−2

Vorpishly t1_j7bgjv9 wrote

What evidence supports your point, though?

7

RamaSchneider OP t1_j7exarz wrote

All you have to do is track computer data gathering and dissemination since the 1950s. We have Wall Street transactions that a human would never have time to be aware of.

Every bit of evidence we have regarding computers screams that we're nowhere near the end of the line.

2

Isabella-The-Fox t1_j7eidnk wrote

AI eventually being able to decide for itself is pure speculation. We humans build it; we control what it does. Right now we have AI that "writes" code; in fact, it's run through OpenAI. It's called GitHub Copilot. I put "writes" in quotes for a reason: the code it writes is just an algorithm drawing from GitHub, meaning if the AI tried to write code for itself, it'd run into errors and flaws (and it has run into errors and flaws while being used. Source: I had a free trial). An AI will never be fully intelligent even when it seems like it is. Once it seems like it is, it really still isn't, at least compared to a human being. We humans will always dictate AI.

1

Zer0D0wn83 t1_j7az6sv wrote

The assumption you're making here is more appropriate to a world where we only have a few AIs, all run by either the government or big tech. This won't be the case. Stability AI is creating all sorts of AI systems, and they are all completely open source. Emad Mostaque's (Stability's CEO) interview with Peter Diamandis is 100% worth checking out to find out more on that.

In short - we'll have all sorts of AI with vastly different filters/controls run by all sorts of companies/charities/organizations.

2

Wild_Sun_1223 t1_j7bvu7m wrote

Will that assumption actually hold, though? What will prevent any one of them from outcompeting the rest, with its controllers thus monopolizing power?

10

Zer0D0wn83 t1_j7bwiz4 wrote

It's uncharted territory, so I can't say for sure. I was just pointing out that it's not currently like that, which gives me hope for the future.

3