
HEAT_IS_DIE t1_j6n2ttm wrote

One thing that irks me in the philosophical debate about consciousness is that it's always considered some magical, otherworldly thing. Unable to solve the "hard problem of consciousness", one person turns to panpsychism, where everything has consciousness (so nothing is really explained), while another calls it some emergent attribute that arises from mere living matter, as if living matter itself weren't special.

To me, consciousness seems to be a biological fact among, pretty verifiably, many animals. So there are likely evolutionary benefits to being conscious to various degrees. And it makes sense: for a complicated life form, it's easier to make quick decisions with a central hub that controls most of the functions instantaneously. If it just reacted with separate systems unaware of what the others were doing, that could lead to contradictory courses of action.

Anyway, what the philosophical accounts of the ontological nature of consciousness rarely seem to address is that it is something that has developed over time, concurrently with others, in an environment that is partly social, partly hostile, and sometimes requires quick decisions in order to ensure survival. It is not a magical metaphysical quirk of the universe.

So finally, regarding artificial consciousness: I can't escape the feeling that the framework for it to happen needs to have some of the same elements present that the natural evolution of consciousness had:

1. a need for self-preservation,

2. a need to react to outside stimuli,

3. others.

The list probably isn't exhaustive, but these are my thoughts and I just wanted to put them somewhere.

9

Magikarpeles t1_j6n6fsg wrote

I think the hard problem is more about being unable to prove or disprove someone else's phenomenological experience of being conscious (at least as I understand it). I think that's quite relevant to the discussion about whether or not the AI is "conscious". Unlike humans and animals, the AI isn't constantly processing and thinking and feeling, only when it's tasked with something.

If consciousness is an emergent property, then it's possible for the AI to be conscious in its own way while it's "thinking". But the point stands that it's not possible to access someone or something's subjective experience, so we can only ever speculate.

4

HEAT_IS_DIE t1_j6ngdon wrote

I think it is not a problem unless you make it one. Of course we can't know exactly what's going on in someone else's experience, but we know other experiences exist, and that they aren't all drastically different when the biological factors are the same.

I still don't understand what is so problematic about not being able to access someone else's experience. It seems to be the very point of consciousness that it's unique to every individual system, and that you can't inhabit another living thing without destroying both. Consciousness reflects outwards. It is evident in reactions. For me, arguing about consciousness entirely outside reality and real-world situations is not the way to understand its purpose and nature. It's like debating whether AI will ever grow a human body and whether we'd be able to notice when it does.

5

jamesj t1_j6obala wrote

It may not be the case that there is a strong correlation between consciousness and evidence of consciousness. Your claim that it is obvious which other entities are conscious and which are not is a huge assumption, one that could be wrong.

5

wow_button t1_j6ogc1e wrote

I like your points about a need for preservation, reacting to stimuli, and others, but I'll posit that we can already do that with computers. "Need for preservation" is an interesting phrase, because I can create an evolutionary algorithm that rewards preservation. But "need" implies desire, and we have no idea how to make a computer program desire anything. Reacting to outside stimuli can be emulated on a computer, but there is nothing with any sense of "outside" and "inside". Others: see the previous point for the problem with "others". "Necessary" is also problematic, because it implies desire or need.
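To make the distinction concrete: "rewarding preservation" really is trivial to code. Here's a toy sketch (not anyone's actual implementation; the hazard-avoidance trait, population size, and mutation rate are all invented for illustration) where a fitness function selects for agents that "preserve" themselves, yet nothing in it desires anything:

```python
import random

random.seed(0)

# Each "agent" is just one number: its tendency to avoid a hazard (0..1).
def fitness(avoidance):
    # Fitness rewards preservation: better avoiders score higher.
    return avoidance

def evolve(pop_size=50, generations=100, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the half that "preserved" themselves best.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors,
        # clipped back into the [0, 1] trait range.
        children = [
            min(1.0, max(0.0, parent + random.gauss(0, mutation)))
            for parent in survivors
        ]
        population = survivors + children
    return population

final = evolve()
print(sum(final) / len(final))  # average avoidance; climbs toward 1.0
```

The population reliably evolves toward self-preserving behavior, which is exactly the point: selection pressure produces the *behavior* of preservation without anything we'd recognize as a need or a desire.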

If you can teach me how to make a computer program feel pain and pleasure, then I agree you can create an AI that is sentient. If you can't, then no matter how interesting, complex, or seemingly intelligent the code's behavior, I don't see how you can consider it conscious.

0

TheRealBeaker420 t1_j6np3b8 wrote

I fully agree with what you're saying. In philosophy it's often described as something physical, and so it stands to reason that it would leave physical evidence. It's difficult to observe the brain while it's still working, but that doesn't make the mind fundamentally inaccessible.

The biggest problem, though, is that it's just not very well defined. In some contexts it's been defined by reaction, as you mentioned, though that definition has to be refined for more complicated applications (e.g. in anesthesiology, where awareness might remain even when reactions are suppressed). Phenomenal experience and qualia are the terms usually used to narrow the topic down to the mind-body problem, but even they have multiple definitions, and some of those definitions lead to the conclusion that qualia don't even exist.

3

tkuiper t1_j6o6zes wrote

I think that's a recipe for familiar versions of consciousness. With panpsychism, what consciousness feels like can vary radically. Concepts of emotion, or even temporal continuity, are traits of only relatively large and specialized consciousnesses. I like to point out that even as a human you experience a wide array of levels and versions of consciousness: when waking up, or when blackout drunk, for example.

1