Submitted by SpinRed t3_10b2ldp in singularity
Angeldust01 t1_j47xz9e wrote
What kind of moral bloatware are you worried about? Any examples? I'd argue lots of our morals and ethics are based on logic. Sometimes very flawed logic, but still.
A strictly utilitarian AI could probably cause problems, so I think it needs to have some kind of values taught to it. There's always going to be someone, or some group, deciding what those values will be. Most likely it'll be whoever is creating the AI.
thedivinegrackle t1_j48axvv wrote
It wouldn't let me write a funny play about the Emerald Tablets because it was offensive to people who believe in Thoth. That's too much.
Ambiwlans t1_j4bmthv wrote
Real humans have attacked GitHub for using the term 'byzantine' because it offends the Byzantines... who never existed. It's a term invented hundreds of years after the fact, precisely to avoid offending anyone, by coming up with a whole new name.
lelandcypress763 t1_j49nvvx wrote
I’ve had it refuse to tell me fart jokes because some people may be offended. It would not return an article criticizing mosquitoes, since it was inappropriate to criticize mosquitoes. Now it helpfully reminds me that characters like Darth Vader are fictional when I ask for a Vader monologue. I’ve had it refuse to tell me a story where the main character starts off rude and learns to be polite, because it’s inappropriate to be rude.
I fully understand the need for some safeguards (i.e., no, I won’t write malware for you), however…
Taron221 t1_j4ai1wb wrote
I asked it a lot of questions about Diablo and Warhammer lore. It would usually try to answer, but every single time it would remind me that Diablo and Space Marines are fiction and can’t hurt me, I guess.
[deleted] t1_j4axyxg wrote
[deleted]
h3lblad3 t1_j4jovvc wrote
It wouldn't write any ridiculous articles for me about politicians because it considered them "offensive and disrespectful", but was perfectly fine with writing me an article about Elon Musk's plan to feed Mars rocks to kindergartners.
The dividing lines it draws are absolutely silly.
madmadG t1_j49tvaf wrote
Try asking ChatGPT for detailed instructions on doing anything illegal. Name all the most horrific acts… any of them could be helped along by ChatGPT operating at the highest level of sophistication.
h3lblad3 t1_j4joj9s wrote
> What kind of moral bloatware are you worried about? Any examples?
Up until very recently, asking for a recipe using a meat not commonly eaten in the US -- even one commonly eaten in other parts of the world (like horse) -- would elicit a scolding from ChatGPT for being "unethical", advice to switch to vegetarianism, and a vegan recipe in place of the one requested.
Now it just chides you for asking for something "unethical" and stops there, but it used to be so much worse.
This is the kind of moral bloatware people are worried about.