
4art4 t1_j27c0vr wrote

The car navigation is a great example, and I will have to sit and think about that. That is more or less what I am getting at. The nav AI updates based on sensor inputs and plans a route accordingly. ChatGPT does not do this. You can ask it for a plan, and it will generate one. But it will never say to itself, "I'm bored. I think I'll try to start a chat with warren_stupidity," or "maybe I can figure out why 42 is the answer to life, the universe, and everything."

So... (just thinking here) maybe what I'm on about is a self-directed thought process. The car nav fails because it only navigates to where we tell it to. ChatGPT fails because it does nothing at all between answering questions. (A tiny contrast sketch follows below.)

1
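
To make that distinction concrete, here is a minimal sketch in Python. Every name in it is a hypothetical stand-in (`read_gps`, `plan_route`, the reply stub), not any real navigation or model API; the point is only the shape of the two programs. One loops on sensor input by itself; the other runs only when called.

```python
import random
import time

# All names below are illustrative stand-ins, not a real nav or model API.
def read_gps() -> float:
    return random.random()

def plan_route(position: float) -> str:
    return "bear left" if position < 0.5 else "bear right"

def car_nav_loop(steps: int = 3) -> None:
    """Always-on: sense, replan, act, repeat; no outside prompt needed."""
    for _ in range(steps):            # bounded here only so the sketch ends
        position = read_gps()         # fresh sensor input drives a new plan
        print("replanned:", plan_route(position))
        time.sleep(0.1)

def chatgpt_style(prompt: str) -> str:
    """Purely reactive: runs only when called, then is inert again."""
    return f"reply to: {prompt}"      # stand-in for a model call

car_nav_loop()
print(chatgpt_style("plan my day"))
```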

4art4 t1_j26ojaj wrote

> unless you are asserting that these activities are essential properties of consciousness.

Yes. A "thinking" machine that does not plan is not "conscious" in my book. How can it be otherwise?

Not so much for dreaming; I included that to point out that when it is not responding to a prompt, it is not doing anything. It is not considering the universe or its place in it. It is not wishing upon a star. It is not hoping for world peace (or anything else). It is just unused code in that moment.

1

4art4 t1_j267zzf wrote

Yes, and ChatGPT does nothing while it is not in use. It does not daydream, plan, or do anything else. So even if it responds reasonably to questions about its own existence, it is only simulating consciousness.

But... I think if you hooked up three ChatGPT systems to talk to each other, and created some sort of feedback routine so that the system asked itself questions, we would be getting closer. The questions would need some kind of motivation, and the answers would need to be saved and built on. (A rough sketch of that loop follows below.)

6
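
A rough sketch of that feedback loop, under loud assumptions: `call_llm` is a stub standing in for whatever chat-model API you would actually use, the "motivation" is just a hard-coded seed question, and memory is a plain list of past answers folded back into each prompt.

```python
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-model call; this stub just echoes part of
    the prompt so the loop runs end to end without any API."""
    return f"a thought continuing from: {prompt[-60:]!r}"

@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)  # saved answers, built on

    def respond(self, question: str) -> str:
        # Fold recent answers back into the prompt so the conversation
        # accumulates instead of starting fresh each turn.
        context = "\n".join(self.memory[-5:])
        reply = call_llm(f"{context}\nQuestion: {question}\nAnswer:")
        self.memory.append(reply)
        return reply

# "Motivation" here is only a standing seed question that restarts the
# exchange whenever an agent produces nothing usable.
SEED_QUESTION = "What should we think about next, and why?"

agents = [Agent("A"), Agent("B"), Agent("C")]
question = SEED_QUESTION
for turn in range(9):  # three full round-robin cycles
    speaker = agents[turn % len(agents)]
    answer = speaker.respond(question)
    print(f"{speaker.name}: {answer}")
    question = answer or SEED_QUESTION  # previous answer becomes next question
```

Even this toy version makes the comment's two requirements explicit: the round-robin loop supplies the "asking itself questions" part, and the memory list supplies the "saved and built on" part. What nothing in it supplies is genuine motivation; the seed question is imposed from outside, which is exactly the gap being pointed at.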