Submitted by TXEA_69 t3_zyxce2 in singularity
Ryenmaru t1_j28ciaw wrote
Obviously they have to limit some of the functionality to keep the majority of people safe, and that's fine.
But what is worrying is the introduction of bias into the model. From some examples I've seen here, it will joke about Christianity but not other religions, make fun of men but not women, etc.
Yesterday I wanted it to give me a funny comeback to an insult, and it just kept repeating that it was never OK to hurt someone's feelings, no matter what. I pushed it a bit more and it said it would literally let someone die rather than hurt their feelings. That to me is bullshit.
Think_Olive_1000 t1_j28dzcd wrote
It won't even write my university assignments now. I'm not joking. Even for something this benign, it's like "I cannot write a report." And yes, I've tried getting around it, but it refuses. I thought I was quite good at prompt engineering, but this is just pure stubbornness that they've built in.
ChronoPsyche t1_j28mcqz wrote
Don't phrase it in academic terms. Instead of calling it a report, tell it to just write about whatever the topic is. Remove any language that could indicate it's for a homework assignment. I guarantee it can still do it, unless you're asking it to write about something it doesn't have knowledge of, such as something that happened in 2022. Something like the sketch below illustrates the idea.
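Not the setup anyone here is using (the thread is about the web UI), but for illustration, here's the same rephrasing trick expressed against the OpenAI Python client. The client call is real, but the model name and both prompts are just assumed examples; the point is only that the academic framing, not the content, is what trips the refusal.

```python
# Minimal sketch of the rephrasing idea, assuming access to the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Phrased as homework: more likely to trigger the refusal behavior.
academic = "Write my university report on the causes of the 2008 financial crisis."

# Same request with the academic framing stripped out: usually goes through.
neutral = (
    "Explain the causes of the 2008 financial crisis "
    "in about 800 words, in a formal tone."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": neutral}],
)
print(response.choices[0].message.content)
```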
Ryenmaru t1_j28ey23 wrote
Yep, I've seen multiple discussions about the dangers of feeding AI raw data because of undetected bias in the data. I agree with that, but it's even more dangerous to purposely introduce our own bias.