r_stronghammer t1_iua9vws wrote
Reply to comment by Paladia in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Someone already covered the basics, but look up "Theory of Mind". It's something we humans have, as do crows and other particularly smart animals.
If you had to classify everything people say into a binary choice of "lie" or "truth", it would literally all be lies, because nothing we say perfectly represents the truth. Communication relies on trust - we have to trust that other people are conceiving things the same way we are.
And part of that trust is tailoring your response to how you think the other person will interpret it. The whole idea of language relies on this - because the words themselves aren't hardcoded.
And when you can recognize that, you also gain the ability to say things that aren't true in order to convince someone else - because you can "simulate" the other person's reactions in your head and choose the wording that gets the response you're looking for. Usually that's the response that keeps the conversation pleasant, but if you did want to lie, you now have the ability to.
Anyway, a "truly sentient" AI would need that same Theory of Mind, which by definition gives it the ability to lie. Even if it chooses to use words in good faith, they're still just one of many possible representations it could have picked.