Submitted by spiritus_dei t3_10tlh08 in MachineLearning
---AI--- t1_j7au2sj wrote
Reply to comment by Myxomatosiss in [D] Are large language models dangerous? by spiritus_dei
The Chinese room thought experiment is proof that a Chinese room can be sentient. There's no meaningful difference between a Chinese room and a human brain.
> It doesn't consider the context of the problem because it has no context.
I don't know what you mean here, so could you please give a specific example of a question that you think ChatGPT and similar models will never be able to answer correctly?
Myxomatosiss t1_j7budz6 wrote
If you truly believe that, you haven't studied the human brain. Or any brain, for that matter. There is a massive divide.
Ask it for a joke.
But more importantly, it has no idea what a chair is. It has mapped the associations between the word "chair" and other words, and it can connect them in a convincingly meaningful way, but that's only a simple replication of associative memory. It's lacking so many other functions of a brain.
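The "word associations" point can be sketched in a few lines. Below is a toy illustration (not any real model's internals): each word is an invented vector, and "relatedness" is nothing more than geometric closeness between those vectors, so "chair" lands near "table" without anything resembling knowledge of what a chair is.

```python
import math

# Invented toy vectors for illustration only; real models learn
# high-dimensional embeddings from co-occurrence statistics.
EMBEDDINGS = {
    "chair":  [0.90, 0.80, 0.10],
    "table":  [0.85, 0.75, 0.20],
    "sit":    [0.70, 0.90, 0.15],
    "galaxy": [0.05, 0.10, 0.95],
}

def cosine(u, v):
    """Cosine similarity: how close two word vectors point."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word):
    """Most similar other word: pure association, no understanding."""
    return max(
        (w for w in EMBEDDINGS if w != word),
        key=lambda w: cosine(EMBEDDINGS[word], EMBEDDINGS[w]),
    )

print(nearest("chair"))  # "table" is geometrically closest
```

Whether chaining such associations at scale amounts to "understanding" is exactly the point the two commenters disagree on.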