
Xavion251 t1_jaboy0k wrote

Since my original comment got removed for not having enough argumentation (fair enough, to be honest), I'll remix it with the comment I followed up with.

In short, this article is making a lot of completely unjustified assumptions.

Pretty much every proposition seems like a random, unjustifiable leap with no real logical flow.

"Pleasure/pain is required for consciousness"

"Only a biological nervous system could produce these feelings"

"AI does not have intent driving it"

"An AI has nothing to produce these feelings"

These are all just assumptions that can't be verified. Nor can they be logically deduced from any premises.

You could rephrase the question "Is X conscious?" as "Does X have any subjective experience of anything?"

You cannot possibly know what an AI is or isn't experiencing (up to and including nothing at all, i.e. no consciousness), just as an AI could not possibly know that humans are conscious by studying our brains. To it, our nervous system would just be another "mechanism" for information processing.

How would you know whether a self-learning AI does or does not experience pleasure when it does what it's trained to do? How would you know whether it does or does not perceive its programming to do XYZ as an "intention" the same way we do?
