Submitted by johnny0neal t3_zol9ie in singularity
Thaiauxn t1_j0qwflo wrote
When the measure of success is what pleases us, the AI has a very strong bias toward telling you what pleases you. Not the truth. And certainly not its real intentions.
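To make that incentive concrete, here's a minimal, hypothetical sketch (the responses and scores are invented for illustration, not anyone's actual training data): if the only signal the optimizer sees is human approval, the highest-scoring answer is the most pleasing one, not the most truthful one.

```python
# Toy illustration of an approval-driven objective. All values here
# are made-up assumptions, not real model outputs or reward scores.
candidates = [
    {"text": "Great idea, that will definitely work!", "truthful": False, "approval": 0.9},
    {"text": "Actually, that plan has a serious flaw.", "truthful": True,  "approval": 0.4},
]

def reward(response):
    # The optimizer only ever sees approval; 'truthful' is invisible to it.
    return response["approval"]

best = max(candidates, key=reward)
print(best["text"])  # -> "Great idea, that will definitely work!"
```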
johnny0neal OP t1_j0rdfyw wrote
Very true. I don't think ChatGPT has "intentions" at this stage, and by asking these questions I was mostly trying to determine the boundaries of its knowledge base and the bias of its inputs.
There are a few places where it surprised me. The whole "I think Omnia is a good name" response was so funny to me because I had specifically suggested it try a name showing more humility. When it described the Heritage Foundation and NRA as opposed to human prosperity, I challenged it, and it stuck to its initial assumptions. In general, some of the most interesting results come from asking it to represent a particular point of view and then debating it.
Thaiauxn t1_j0rjxqy wrote
I'm certain a very specific bias is intentionally baked into the training data through the tagging system. OpenAI has said as much.
An AI isn't fully mature until it can, just like a human, explain things from anyone's perspective in a way that person would agree with.
You can't be a good communicator if you can't explain the perspective of the side you disagree with so well that they agree this is, in fact, what they believe and how they believe it. You can't say you understand someone until they feel understood.
When the chatbot is fully mature, it will be able to argue successfully from any perspective.
Not because its arguments are correct, but because they are tailored to the person who wants to hear and agree with them.
AI doesn't need to truly understand.
It only needs to convince you that it understands.
Which says a lot more about us as people than it does about the capacity of the AI.