Submitted by timscarfe t3_yq06d5 in MachineLearning
billy_of_baskerville t1_ivqb6e1 wrote
Reply to comment by waffles2go2 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
>I think the biggest problem with the CRA, and even Dneprov's game, is that it's not clear what the "positive conception" of understanding should be (Searle probably elaborates in some other books or papers). They are quick to quip "well, that doesn't seem like understanding; that doesn't seem to possess intentionality," and so on and so forth, but they don't elaborate what they think possessing understanding and intentionality actually amounts to, so that we could evaluate whether it's missing.
Well put, I agree.