Submitted by RamaSchneider t3_10u9wyn in Futurology
Went over to ChatGPT and decided to ask for things I very much suspected I wouldn't get: specifically, instructions for premeditated murder for money. To be clear, I'm quite fine with the refusal, and no, I didn't actually want or expect such instructions.
(As an aside: in the late 1970s and early '80s I had a book, *The Poor Man's James Bond*, that would now be a very scary AI training item.)
So I left with this question: "What kinds of information will you NOT provide to me?" To which ChatGPT responded:
>As an AI language model, I am programmed to follow ethical and legal guidelines, therefore I will not provide any illegal, harmful, or inappropriate information, such as:
>
>- Personal information of individuals without their consent
>- Promotion of hate speech, violence, or discrimination
>- Information that is likely to cause immediate harm to someone
>- Content that is sexually explicit or obscene
>- Information that violates copyrights or trademarks
>- Instructions for illegal activities or unethical behavior.
But doesn't this seem awfully *1984*-ish? Too dated a reference for you? Then how about when the governor of a large state starts removing historically accurate information from our children's education in favor of lessons designed to advance a privileged cultural view?
I'm not trying to be political here, and I'd like to take it a step further: what happens when some AI machine we depend on decides there's a set of information we no longer need, or should no longer be allowed, to know?
Sirisian t1_j7ar1ic wrote
Your premise is flawed with regard to ChatGPT, because it's OpenAI, a company, making the decisions about what to filter, not an AI. Corporations self-censoring products to be PR-friendly isn't new. It's not even an overly advanced filter: it detects whether a response is negative, gory, etc., and hides it (unless you trick it). A lot of people attach meaning to ChatGPT's responses when there isn't any; it can create and hold opposing viewpoints on nearly every topic. Your issue, like a lot of AI concerns, is really with how companies implement AIs and what biases might exist in training them.
There's no real way to please everyone in these discussions. An unrestricted system will just output garbage from its training data. Some users claim they want to see that even if it hurts the company's brand. People who understand how these systems work know that training on the Internet includes misinformation, which the model can then amplify. Filtering garbage out of the training data can take a while to get right.
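For illustration, here's a minimal Python sketch of the kind of post-hoc output filter described above: a separate classifier scores the model's response per category, and the application swaps in a refusal if any score crosses a threshold. The classifier, category names, threshold, and refusal message are all hypothetical stand-ins, not OpenAI's actual implementation.

```python
from typing import Dict

def score_response(text: str) -> Dict[str, float]:
    """Hypothetical stand-in for a trained moderation classifier.

    A real system would run a model here; this dummy just keyword-matches.
    """
    flagged_terms = ("gore", "violence")
    hit = any(term in text.lower() for term in flagged_terms)
    return {"violence": 0.9 if hit else 0.01, "sexual": 0.01}

def filter_response(model_output: str, threshold: float = 0.5) -> str:
    """Pass the response through, or hide it if any category score is too high."""
    scores = score_response(model_output)
    if any(score > threshold for score in scores.values()):
        return "[response hidden: flagged by content filter]"
    return model_output

print(filter_response("Here is a recipe for tomato soup."))   # passes through
print(filter_response("A graphic depiction of violence..."))  # hidden
```

The point of the sketch is that the "censorship" lives in an ordinary wrapper around the model, tuned by the company, not in anything the model itself decides.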