Submitted by timscarfe t3_yq06d5 in MachineLearning
waffles2go2 t1_ivq3j0p wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
It's vague and wishy-washy because it's a working hypothesis, one that is waiting for something better to replace or augment it.
I'll agree its primary weakness is the fuzzy line between what makes something "thinking" and what is simply a long lookup table, but that is exactly the question we need to keep riffing on until we converge on something we can agree upon....
That is a far more mature way of driving our thinking than the basic "maths will solve everything" stance, when we freely admit we don't understand the maths....
billy_of_baskerville t1_ivqb6e1 wrote
>I think the biggest problem with CRA and even Dneprov's game is that it's not clear what the "positive conception" (Searle probably elaborates in some other books or papers) of understanding should be. They are quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality, and so on and so forth," but they don't elaborate what they think possessing understanding and intentionality actually is like, so that we can evaluate whether it's missing.
Well put, I agree.