visarga t1_iu7nryj wrote
Reply to comment by SlenderMan69 in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Humans fool and lie to themselves all the time. One example that comes to mind is anti-vaxxers protesting vaccines and then still going to the hospital when they get sick, or worse, protesting abortion and then having one in secret.
Similarly, neural nets will learn the training set perfectly but fail on new data; they give you the illusion of learning if you're not careful. That's why every paper reports the score on a separate held-out block of test examples the model has never seen. It's a lying, cheating bastard when it comes to learning. This game AI found a clever way to win points without having to do the whole course.
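A minimal sketch of that held-out-evaluation point, using scikit-learn with a toy dataset of my own choosing (nothing from the comment itself): an unconstrained model can memorize noisy training data and look perfect, while the unseen test split exposes that it hasn't really learned.

```python
# Hypothetical illustration: memorization vs. generalization.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Small noisy dataset; an unconstrained tree can fit it exactly.
X, y = make_classification(n_samples=200, n_features=20, flip_y=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier(random_state=0)  # no depth limit -> pure memorization
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # ~1.0, looks like perfect learning
print("test accuracy: ", model.score(X_test, y_test))    # much lower -> the illusion breaks
```

The gap between the two numbers is exactly why the held-out score is the one that gets reported.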