Philience t1_ivoav4e wrote
Reply to [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
This thought experiment is aimed at the classical computational theory of mind, where computers simply follow explicit rules. Modern machine learning systems are dynamic, chaotic systems that do not follow rules the way classical computers do. They learn, they make mistakes, they build representations at different levels. In many ways, they are like real biological systems that can understand.
Thus I think the Chinese Room cannot evoke intuitions relevant to modern or future artificial candidates for intelligence.
Philience t1_ivoywj3 wrote
Reply to comment by timscarfe in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
What part do you think is false?
What do you mean by "simple"? Predicting each state of a neural network is practically impossible. Neural networks are usually seen as prime examples of complex cognitive models (not computational models), even though the algorithm that emulates a neural network on a computer does, of course, perform computation. Neural networks do not multiply matrices; your computer does.
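To make that level-of-description point concrete, here is a minimal NumPy sketch (purely illustrative; the layer sizes, weights, and input are made up): the array operations below are what the computer does when it emulates a network, while the "network" itself is a description of the system at a higher level.

```python
import numpy as np

# At the implementation level, "running a neural network" is the machine
# multiplying matrices and applying nonlinearities.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # hypothetical weights: 3 inputs, 4 units
b = rng.standard_normal(4)        # hypothetical biases
x = np.array([0.5, -1.0, 2.0])    # an example input vector

h = np.tanh(W @ x + b)            # the matrix multiplication happens here, in the computer
print(h)

# The "network" -- its learned representations and behaviour -- is a
# description at a higher level than these array operations.
```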
I admit that whether they are chaotic or dynamic systems depends on the model and its scale. And most models might be neither.