Submitted by timscarfe t3_yq06d5 in MachineLearning
Mods feel free to delete this if you feel it's inappropriate.
We interviewed Francois Chollet, Mark Bishop, David Chalmers, Joscha Bach and Karl Friston on the Chinese Room argument.
The Chinese Room Argument was first proposed by philosopher John Searle in 1980. It is an argument against the possibility of "strong AI" – that is, the idea that a machine running a program could ever genuinely understand, as opposed to merely simulating understanding.
The argument goes like this:
Imagine a room in which a person sits at a desk, with a book of rules in front of them. This person does not understand Chinese.
Someone outside the room passes a piece of paper through a slot in the door. On this paper is a Chinese character. The person in the room consults the book of rules and, following these rules, writes down another Chinese character and passes it back out through the slot.
To someone outside the room, it appears that the person inside is holding a conversation in Chinese. In reality, the person has no understanding of the conversation – they are just following the rules in the book.
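The rule-following procedure described above can be sketched as a plain lookup table: the operator maps each input symbol to an output symbol without any model of what either symbol means. The particular symbols and replies below are invented purely for illustration, not drawn from Searle's paper.

```python
# Minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is a lookup table; the operator never interprets the symbols.
RULE_BOOK = {
    "你好": "你好！",            # invented rule: greeting -> greeting
    "你会说中文吗？": "会。",    # invented rule: "do you speak Chinese?" -> "yes"
}

def operator(symbol: str) -> str:
    """Mechanically follow the rule book; no understanding is involved."""
    # Unknown inputs get a fixed placeholder reply.
    return RULE_BOOK.get(symbol, "？")

print(operator("你好"))  # the room produces a fluent-looking reply
```

The point of the sketch is that the function's behavior can look conversational from outside while the mapping itself carries no semantics, which is exactly the intuition the argument trades on.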
The Chinese Room Argument is an argument against the idea that a machine could ever be truly intelligent. It is based on the idea that intelligence requires understanding, and that following rules is not the same as understanding.
TL;DR - Chalmers, Chollet, Bach and Friston think that minds can arise from information (functionalists, with some interesting distinctions on whether it's causal, strongly emergent, etc.); Bishop and Searle do not – they hold that there is an ontological difference in "being".
visarga t1_ivnccvy wrote
Putting an LLM on top of a simple robot makes the robot much smarter (PaLM-SayCan). The Chinese Room doesn't have embodiment, so was it a fair comparison? Maybe the Chinese Room on top of a robotic body would be much improved.
The argument tries to say that intelligence is in the human, not in the "book". But I disagree; I think intelligence is mostly in the culture. A human who grew up alone, without culture and society, would not be very smart or able to solve tasks in any language. Foundation models are trained on the whole internet today, and they display new skills. It must be that our skills reside in the culture. So a model learning from culture would also be intelligent, especially if embodied and allowed to have a feedback control loop.