
a4mula t1_j0oikkv wrote

I don't claim to know the technical aspects of how OpenAI handles the training of their models.

But from my perspective it feels like a really good blend of minimizing content that can be ambiguous. It's likely, though again I'm not an expert, that this is inherent in these models; after all, they do not handle ambiguous inputs as effectively as they handle things that can be objectively stated, refined, and precisely represented.

We should be careful of any machine that deals with subjective content. While ChatGPT is capable of producing this content if it's requested, its base state seems to do a really great job of keeping things as rational, logical, and fair as possible.

It doesn't think after all, it only responds to inputs.
