
bread93096 t1_it8uqtj wrote

Ah I see. Basically you’re referring to the Chinese room problem. I’d argue that’s more a problem with our perception than with consciousness itself. It is impossible for us to determine from the outside whether any system is conscious or not. This is true even of other human beings, as the p-zombie problem illustrates. But it would certainly be possible for an artificial system to be conscious in fact; we just wouldn’t know about it.


wow_button t1_it98z33 wrote

Yeah, it's analogous to the black box problem, that's a good point. But what I'm saying is that computers are demonstrably a mechanistic black box. I get that maybe that's controversial? But that is literally what computers do. I've read arguments like Tononi's IIT, but the whole 'when it's complex and integrated, consciousness happens' move does not convince me (though my understanding is admittedly shallow).

I can create a computer program that capitalizes all of the letters or words you type in a few lines of code. Does any part of the computer understand what it's doing? No, in the same way a see-saw does not understand what it's doing when you push on the high end and the other side goes up. The computer is a mechanistic, deterministic machine that happens to be able to do some really cool and complicated stuff.
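To make the point concrete, the "few lines of code" could look like the sketch below (Python is my choice here, and the function name is mine; the original doesn't specify a language):

```python
# A trivial "capitalize whatever you type" program: the machine applies
# a fixed symbol transformation with no understanding of the content.
def capitalize_input(text: str) -> str:
    return text.upper()

print(capitalize_input("does the machine understand this?"))
# prints: DOES THE MACHINE UNDERSTAND THIS?
```

Every step is a deterministic table lookup over character codes, which is the sense in which the program is a see-saw rather than a mind.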

All other computer programs, including the most sophisticated current AI, are just more complicated versions of my simple program.


bread93096 t1_it9a9r9 wrote

The counter argument would be that the human brain is also an amalgam of relatively simple sub-processors, and consciousness is the result of these many sub-processors interacting. It’s supported by the fact that the parts of the brain that are associated with consciousness and sentience develop relatively late in the evolutionary timeline of most intelligent species. However until we can say conclusively how consciousness works in the human brain, we can’t say whether it is possible in an artificial system, and we are not at all close to solving that problem.


wow_button t1_it9joo8 wrote

Well said - my reasoning above is why I'm so drawn to Analytic Idealism. I can't get past my own experience with programming to make the leap that there is some magic number of logic gates, memory, and complex processing that emerges into consciousness. Materialism kind of dictates that that must be the case. Panpsychism also appealed to me (consciousness is fundamental to the material world), but Analytic Idealism scratches that itch in a much more satisfying way. Ultimately I guess I'm skeptical that a pure materialist perspective will grant us the insights into consciousness necessary to create a compelling AI. Thanks for the article and the convo!
