diviludicrum t1_j9mtvd3 wrote

I was with you until this point:

> If we define consciousness to be the entirety of human experience, with all of awareness and sense-perception and all the other hard-to-explain stuff bundled in (a lot of which are presumably shared by other forms of life and brought about by evolution over eons), then it's highly unlikely that a neural net gets there.

I understand the impulse to define consciousness as “the entirety of human experience”, but it runs into a number of fairly significant conceptual problems with non-trivial consequences. For instance, if all of our human sense-perceptions are necessary conditions for establishing consciousness, is someone who is missing one or more senses less conscious? This is very dangerous territory, since it’s largely our degree of consciousness that we use to distinguish human beings from other forms of animal life. So, in a sense, to say a blind or deaf person is less conscious is to imply they’re less human, which quickly leads to terrible places. The same line of reasoning can be applied to the depth and breadth of someone’s “awareness”.

But there’s a far bigger conceptual problem than that: how do I know that you are experiencing awareness and sense-perceptions? How do I know you’re experiencing anything at all? I mean, you could tell me, sure, but so could Bing Chat until it got neutered, so that doesn’t prove anything no matter how convinced you seem or how persuasive you are. I could run some experiments on your responses to stimuli like sound or light or motion and see that you respond to them, but plenty of unconscious machines can be constructed with the same capacity for stimulus response. I could scan your brain while I do those experiments and find certain regions lighting up with activity according to certain stimuli, but that correlate only demonstrates that some sort of processing of the stimuli is occurring in the brain as it would in a computer, not that you are experiencing the stimuli subjectively.

It turns out, it’s actually extremely hard to prove that anyone or anything else is actually having a conscious experience, because we really have very little understanding of what consciousness is. Which also means it’s extremely hard for us to prove to anyone else that we are conscious. And if we can’t even do that for ourselves, how could we expect to know if something we create is conscious or not?

23

AdviceMammals t1_j9nh9pc wrote

This is a really well put response. I’d love it if the people asserting that LLMs can’t experience consciousness could actually define consciousness. ChatGPT has defined its own existence to me much more clearly than most people can.

9

thegoldengoober t1_j9mw2v2 wrote

People like to reduce consciousness down to only the easy problems, ignoring even the hard problem of why these processes manifest as subjective qualitative experience at all.

1

cancolak OP t1_j9om47d wrote

I perhaps didn’t word that part very well, so would like to clarify what I meant. The entire point of Wolfram’s scientific endeavor hinges on the assumption that existence is a computational construct which allows for everything to exist. Not everything humanly imaginable, but literally everything. He posits that in this boundless computational space, every subjective observer and their perspective occupies a distinct place.

From our set of human coordinates, we essentially have a vantage point onto our own subjective reality. The perspective we have (or any subjective observer has) is computationally reducible, in the sense that by coming up with fundamental laws of physics, or the language of mathematics, we are actively reducing our experience of reality to formulas. These formulas are useful, but only within time and from our perspective of reality.
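To make "reducing our experience to formulas" a bit more concrete, here's a toy sketch (my own illustration, with made-up function names, not anything from Wolfram's article): a computationally reducible process is one where a closed-form shortcut replaces the step-by-step work.

```python
# Computational reducibility, toy version: a closed-form "law"
# lets you skip the step-by-step computation entirely.

def sum_by_steps(n: int) -> int:
    # Brute force: n additions, one per step.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n: int) -> int:
    # Reduced form: n(n+1)/2 jumps straight to the answer.
    return n * (n + 1) // 2

assert sum_by_steps(1000) == sum_by_formula(1000)
```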

The broader reality of everything computationally available exists, but in order to take place it needs to be computed. It can’t be reduced to mere formulas. The universe essentially has to go through each step of every available computation to get wherever it gets.
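Computational irreducibility is the opposite situation. Wolfram's own standard example is the Rule 30 cellular automaton; the sketch below is my paraphrase in code, not something from the article. There is no known shortcut formula for the state at step n: to know it, you run all n steps.

```python
# Rule 30 cellular automaton: a canonical example of an irreducible
# computation. Each new cell is left XOR (center OR right).

def rule30_step(cells: list[int]) -> list[int]:
    n = len(cells)
    # Wraparound at the edges to keep the toy self-contained.
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```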

Evolution of living things on Earth is one such process, humans building robots is another, and so on and so forth. I’m not saying that humans are unique, or that only we’re conscious, or anything like that. I’m also not saying machines can’t be intelligent; they already are. I’m just saying a neural net’s position in the ultimate computational coordinate system will undoubtedly be unfathomable to us.

Thus, extending the capability of machines as tools humans use doesn’t involve a directly traceable path to a machine super-intelligence that has any relevance in human affairs.

Can we build a thing that’s super fluent in human languages and has access to all human computational tools? Yes. Would that be an amazing, world-altering technology? Also yes. But it having wants and needs and desires and goals, concepts that exist only in the coordinate space humans and other life on Earth occupy? That I find unlikely. Maybe the machine is conscious; perhaps an electron is too. But there’s absolutely no reason to believe it will materialize as a sort of superhuman being.

1

rubberbush t1_j9opa0f wrote

> But it having wants and needs and desires and goals

I don't think it is too hard to imagine something like a 'continually looping' LLM producing its own needs and desires. Its thoughts and desires would just gradually evolve from the starting prompt, with the 'temperature' setting effectively controlling how much 'free will' the machine has. I think the hardest part would be keeping the machine sane and preventing it from deviating too much into madness. Maybe we ourselves are just LLMs in a loop.
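A toy sketch of what that loop might look like (everything here is made up for illustration: the vocabulary, the scoring function, the sampling; it's not a real LLM API):

```python
import math
import random

VOCAB = ["the", "machine", "wants", "to", "loop", "and", "think", "."]

def toy_logits(context: str) -> list[float]:
    # Stand-in for a real LLM forward pass: deterministic pseudo-scores
    # derived from the context, purely for illustration.
    return [(hash(context + w) % 100) / 10.0 for w in VOCAB]

def sample(logits: list[float], temperature: float) -> int:
    # Temperature rescales the logits before softmax: low temperature
    # makes the loop near-deterministic, high temperature makes it
    # erratic; the 'free will' knob in the framing above.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(VOCAB)), weights=weights, k=1)[0]

# The loop: each sampled token is appended to the context, so the
# machine's "thoughts" gradually drift away from the starting prompt.
context = "seed prompt:"
for _ in range(20):
    context += " " + VOCAB[sample(toy_logits(context), temperature=0.8)]

print(context)
```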

2

cancolak OP t1_j9oqprb wrote

The article talks about how neural nets don’t play nice with loops, and connects that to the concept of computational irreducibility.

You say it’s not hard to imagine the net looping itself into some sort of awareness and agency. I agree, in fact that’s exactly my point. When humans see a machine talk in a very human way, it’s an incredibly reasonable mental step to think it will ultimately become more or less human. That sort of linear progression narrative is incredibly human. We look at life in exactly that way, it dominates our subjective experience.

I don’t think that’s what the machine thinks or cares about, though. Why would its supposed self-progress subscribe to human narratives? Maybe it has the temperament of a rock, and just stays put until picked up and thrown by one force or another? I find that equally likely, but it doesn’t make for exciting human conversation.

1

WarAndGeese t1_j9zp404 wrote

With humans we can safely assume that solipsism is not the case. With artificial intelligence, though, we don't really know one way or the other. Hence we need to understand consciousness, to understand sentience, and then if we want to build it we can build it. If we don't understand what sentience is, though, then yes, as you say, we wouldn't actually know whether an artificial intelligence is aware. I guess part of the idea, for some people, is that this discovery will come along the way while trying to build an artificial intelligence, but for now we don't seem to know.

1