Submitted by sailhard22 t3_1134rru in singularity
I believe AI will not pass the Turing test, and that the test itself is flawed, because AI responses are so intelligent, knowledgeable, and accurate that it is obvious the response is coming from an AI and not a human.
I can already spot ChatGPT-generated text in the wild because it’s so artificially accurate (at least grammatically) that it’s unnatural.
In other words, AI is too smart to pass a Turing test, and it would need to dumb itself down dramatically in order to convince someone that it was human.
What would be the point of dumbing down AI? It’s a fruitless exercise.
AsheyDS t1_j8o1aso wrote
You're thinking about it the wrong way. It's not too smart, it just seems that way because it's quite verbose, and you equate that with intelligence. If it were more intelligent, it would be both succinct and considerate of the person it's interacting with. If the goal were to sound like a human and pass the Turing test, it would take the things you mentioned into consideration when formulating a response, and it would seem to 'dumb down' and format its responses in a more natural-sounding way. But that isn't the goal, and it isn't intelligent enough on its own to consider that.
Personally, I think the Turing test is pointless anyway, because as verbose and unnatural as the responses can be, people are still willing to believe it's sentient and embodies all the qualities of a human. Or to put it another way, we've already failed it and have to come up with alternative ways of testing it.