Submitted by timscarfe t3_yq06d5 in MachineLearning
red75prime t1_ivwmz24 wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
I'll be blunt. No amount of intuition pumping, word-weaving, and hand-waving can change the fact that there's zero evidence of the brain violating the physical Church-Turing thesis. That means there's zero evidence that we can't build a transistor-based functional equivalent of the brain. It's as simple as that.
Nameless1995 t1_ivxosmw wrote
I don't think Searle denies that so I don't know who you are referring to.
Here's a quote from Searle:
> "Could a machine think?"
> The answer is, obviously, yes. We are precisely such machines.
> "Yes, but could an artifact, a man-made machine, think?"
> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question. "OK, but could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
red75prime t1_ivxqxgr wrote
Ah, OK, sorry. I thought that the topic had something to do with machine learning. Exploration of Searle's intuitions is an interesting prospect, but it fits other subreddits more.