S_XOF t1_j7nvsx4 wrote

Whatever model you're using has to be trained on human speech, ideally the largest pool of human speech available, and that's most likely going to include some bigoted language, especially since the training data probably comes from online comments and people tend to be more comfortable being assholes online. You can put in safeguards to try to prevent an AI from saying certain things, but you can't 100% predict what it's going to generate.