Ryenmaru t1_j28ciaw wrote

Obviously they have to limit some of the functionality to keep the majority of people safe, and that's fine.

But what is worrying is the introduction of bias into the model. From some examples I've seen here, it will joke about Christianity but not other religions, and make fun of men but not women, etc.

Yesterday I asked it for a funny comeback to an insult, and it just kept repeating that it's never okay to hurt someone's feelings, no matter what. When I pushed it a bit further, it said it would literally let someone die rather than hurt their feelings. That, to me, is bullshit.