Ryenmaru t1_j28ciaw wrote
Obviously they have to limit some of the functionality to keep the majority of people safe, and that's fine.
But what is worrying is the introduction of bias into the model. From some examples I've seen here, it will joke about Christianity but not other religions, make fun of men but not women, etc.
Yesterday I wanted it to tell me a funny comeback to an insult, and it just kept repeating that it was never OK to hurt someone's feelings, no matter what. I pushed it a bit more, and it would literally let someone die rather than hurt their feelings. That, to me, is bullshit.
Ryenmaru t1_j28ey23 wrote
Reply to comment by Think_Olive_1000 in Potentiality and Capabilities of Chat GPT has been reduced by TXEA_69
Yep, I've seen multiple discussions about the dangers of feeding AI raw data because of undetected bias in the data. I agree with that, but it's even more dangerous to purposely introduce our own bias.