
LarsPensjo t1_j33hla5 wrote

I asked ChatGPT "Are you sentient?", and got the (expected) answer

> I am a machine learning model trained to simulate conversation, so I don't have consciousness or sentience in the same way that humans do.

Then I asked, "Please explain why you say that.", and got the answer

> As a machine learning model, I have been programmed to process and generate text based on patterns in data...

Doesn't that mean it reflected on its own thinking?

6

sticky_symbols t1_j33icmp wrote

No, it really doesn't. That's you asking it about its thinking and it responding. That's different from reflection, which would consist of it asking itself something about its own thinking.

11

LarsPensjo t1_j33nt4v wrote

One definition of "reflection" is

> serious thought or consideration.

Can you give an example of something a human can reflect on that ChatGPT can't? And more crucially, what method would you use to detect the difference?

What I am aiming at is that these are borderline philosophical questions, without clear definitions.

2

gleamingthenewb t1_j36162g wrote

Any example of a human reflecting is an example of something ChatGPT can't do, because all it can do is predict what word comes next based on its training data and the prompt you give it.
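Concretely, "predict what word comes next" looks roughly like this. (A rough sketch only, using GPT-2 from Hugging Face as a stand-in for ChatGPT with plain greedy decoding; the actual model and decoding setup aren't public.)

```python
# Rough sketch of "predict the next word", with GPT-2 standing in for ChatGPT.
# Greedy decoding for simplicity: the reply is just repeated next-token picks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits        # a score for every possible next token
    next_id = logits[0, -1].argmax()      # take the single most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))                 # the prompt plus 10 predicted tokens
```

The whole reply is that loop repeated; there's no separate step where it inspects why it picked those tokens.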

3

sticky_symbols t1_j37q6jw wrote

They are not. They are questions about the flow of information in a system. Humans recirculate information in the process we call thinking or considering. ChatGPT does not.

1

Technologenesis t1_j33y3xa wrote

It doesn't seem to be reflecting on its own thinking so much as reflecting on the output it's already generated. It can't look back at its prior thought process and go "D'oh! What was I thinking?", but it can look at the program it just printed out and find the error.

But it seems to me that ChatGPT is just not built to look at its own "cognition"; that information just shouldn't be available to it. Each time it answers a prompt it's just reading back through the entire conversation as though it's never seen it before; it doesn't remember generating the responses in the first place.
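A sketch of what I mean (purely illustrative; `generate_reply` here is a hypothetical stand-in for the model, not OpenAI's actual interface):

```python
# The model itself keeps no memory between turns; the only "memory" is the
# transcript that gets re-sent, in full, on every single call.

def generate_reply(transcript: str) -> str:
    # In reality: next-token prediction conditioned on the whole transcript.
    return "(reply conditioned on everything in the transcript above)"

transcript = ""
for user_msg in ["Are you sentient?", "Please explain why you say that."]:
    transcript += f"User: {user_msg}\nAssistant: "
    reply = generate_reply(transcript)   # sees the whole conversation "cold" each time
    transcript += reply + "\n"
    print(reply)
```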

I can't be sure but I would guess that the only way ChatGPT even knows that it's an AI called ChatGPT in the first place is because OpenAI explicitly built that knowledge into it*. It doesn't know about its own capabilities by performing self-reflection - it's either got that information built into it, or it's inferring it from its training data (for instance, it may have learned general facts about language models from its training data and would know, given that it itself is a language model, that those facts would apply to itself).

*EDIT: I looked it up, and in fact the reason it knows these things about itself is that it was fine-tuned on sample conversations in which human beings roleplayed as an AI assistant. So it doesn't know what it is through self-reflection; it essentially knows because it learned by example how to identify itself.

3

monsieurpooh t1_j35wbej wrote

It's probably going to devolve into a semantics debate.

ChatGPT's model weights stay the same until they retrain it and release a new version.

But you feed it back its own output plus the new prompt, and now it has extra context about the ongoing conversation.
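Roughly like this (a sketch only, with GPT-2 standing in for ChatGPT and the same decoder-only setup assumed): generating text never touches the weights; between turns, only the prompt you build up changes.

```python
# Sketch: the parameters don't change during a conversation, only the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

before = {name: p.clone() for name, p in model.named_parameters()}

prompt = "User: Are you sentient?\nAssistant:"
with torch.no_grad():
    out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))

# The parameters are bit-for-bit identical after generating a reply.
assert all(torch.equal(before[name], p) for name, p in model.named_parameters())
```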

For now I would have to say it shouldn't be described as "reflecting on its own thinking", since each turn is independent of the others and it's simply trying to predict what would reasonably appear next in the text. For example, the text could be an interview in a magazine, etc.

That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

2

LarsPensjo t1_j35yrqx wrote

> That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

I definitely agree. IMO, you see a lot of "AI is not true intelligence", which doesn't really matter.

Eliezer Yudkowsky had an interesting observation:

> Words aren't thoughts, they're log files generated by thoughts.

I believe he meant the written word.

2

gleamingthenewb t1_j3605ep wrote

Nope, that's just its prediction of what string of characters corresponds to your prompt.

1

LarsPensjo t1_j36arit wrote

Ok. Is there anything you can ask me where the answer can't be explained as me just predicting a string of characters that corresponds to your prompt?

1

gleamingthenewb t1_j36ht6y wrote

That's a red herring, because your ability to generate text without being prompted proves that you don't just predict strings of characters in response to prompts. But I'll be a good sport. I could ask you any personal question that has a unique answer of which you have perfect knowledge: What's your mother's maiden name? What's your checking account number? Etc.

1

LarsPensjo t1_j36mdv5 wrote

But that doesn't help determine whether it reflects on its own thinking.

1

gleamingthenewb t1_j36srlq wrote

You asked for an example of a question that can't be answered by next-word prediction.

1

FusionRocketsPlease t1_j36r2pm wrote

Try asking an unusual question that one would have to reason through to answer. It will miss. This has already been shown in this same sub today.

1