
sticky_symbols t1_j32czsi wrote

ChatGPT is misrepresenting its own capabilities. It cannot reflect on its own thinking or self-improve.

225

SeaBearsFoam t1_j32ls6k wrote

I also don't think it has sensors and input channels through which it receives information about the world.

120

sticky_symbols t1_j32rgwg wrote

True.

It also occurred to me, though, that this might actually be what a high school teacher would say about ChatGPT. They might get it this wrong.

61

overlordpotatoe t1_j336wqa wrote

I wonder if casting the explainer as someone with no special knowledge or expertise in the field makes it more likely to get things like this wrong.

24

bactchan t1_j35u5c4 wrote

I think the likelihood of ChatGPT emulating both the style and the accuracy of a high school teacher is a bit beyond scope.

5

FidgitForgotHisL-P t1_j349sze wrote

It seems very likely the basis for this reply literally comes from people writing their own opinions on this question. Given the gulf between what it’s actually doing and what someone with no interest in AI would assume is happening, you could see them matter-of-factly asserting these points, which in turn means ChatGPT will too.

10

Technologenesis t1_j33wdnr wrote

I think that would depend on how loosely you define a sensor. Somehow, your messages are getting to ChatGPT for it to process them. This could be considered a kind of sensation.

9

kala-umba t1_j3394ef wrote

Around Christmas it told me that it doesn't have them! Maybe something changed xD And besides, it is not able to start communication, so it only does something when asked. It's not an "individual" agent capable of maneuvering in this plane of existence.

5

2Punx2Furious t1_j34xuif wrote

It does: the text that you give it as a prompt. That is an input, or in other words, a sensor. It is kind of limited, but sufficient to consider it "aware" of something.

5

Solobolt t1_j3575db wrote

I would say that it may not be aware that it doesn't. Because learning is disabled when talking to it, it would be as if someone had short-term memory loss. From its own perspective it would be experiencing new things in its own real time, sometimes including its own interactions, so it may 'think' it experiences things like that. Shadows on cave walls and whatnot.

1

overlordpotatoe t1_j336iiu wrote

Yup. It once told me it couldn't make text bold even though it had just been doing that. Do not trust the AI to self-report about itself, ever, but especially if you ask it to roleplay, because then it seems to switch off whatever controls it has that make it at least try to be factual, and it starts spitting out pure fiction.

12

kriskoeh t1_j33besz wrote

It has told me several times that it can’t think for itself lol.

8

LarsPensjo t1_j33hla5 wrote

I asked ChatGPT "Are you sentient?", and got the (expected) answer

> I am a machine learning model trained to simulate conversation, so I don't have consciousness or sentience in the same way that humans do.

Then I asked, "Please explain why you say that.", and got the answer

> As a machine learning model, I have been programmed to process and generate text based on patterns in data...

Doesn't that mean it reflected on its own thinking?

6

sticky_symbols t1_j33icmp wrote

No, it really doesn't. That's you asking it about its thinking and it responding. That's different from reflection, which would consist of it asking itself something about its thinking.

11

LarsPensjo t1_j33nt4v wrote

One definition of "reflection" is

> serious thought or consideration.

Can you give an example where a human person can reflect on something, which ChatGPT can't? And more crucially, what method would you use to detect this?

What I am aiming at, is that these are borderline philosophical questions, without clear definitions.

2

gleamingthenewb t1_j36162g wrote

Any example of a human reflecting is an example of something ChatGPT can't do, because all it can do is predict what word comes next based on its training data and the prompt you give it.
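
Concretely, generation is just that one step repeated in a loop. A minimal sketch using a small public model through the Hugging Face transformers library (ChatGPT's own weights aren't available, so gpt2 stands in here):

```python
# Greedy next-token prediction: at each step, take the single most likely next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):                            # generate 10 tokens, one at a time
        logits = model(input_ids).logits           # scores for every possible next token
        next_id = logits[0, -1].argmax()           # pick the most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```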

3

sticky_symbols t1_j37q6jw wrote

They are not. They are questions about the flow of information in a system. Humans recirculate information in the process we call thinking or considering. ChatGPT does not.

1

Technologenesis t1_j33y3xa wrote

It doesn't seem to be reflecting on its own thinking so much as reflecting on the output it's already generated. It can't look back at its prior thought process and go "D'oh! What was I thinking?", but it can look at the program it just printed out and find the error.

But it seems to me that ChatGPT is just not built to look at its own "cognition", that information just shouldn't be available to it. Each time it answers a prompt it's just reading back through the entire conversation as though it's never seen it before; it doesn't remember generating the responses in the first place.

I can't be sure but I would guess that the only way ChatGPT even knows that it's an AI called ChatGPT in the first place is because OpenAI explicitly built that knowledge into it*. It doesn't know about its own capabilities by performing self-reflection - it's either got that information built into it, or it's inferring it from its training data (for instance, it may have learned general facts about language models from its training data and would know, given that it itself is a language model, that those facts would apply to itself).

*EDIT: I looked it up, and in fact the reason it knows these things about itself is that it was fine-tuned using sample conversations in which human beings roleplayed as an AI assistant. So it doesn't know what it is through self-reflection; it essentially knows it because it learned by example how to identify itself.

3

monsieurpooh t1_j35wbej wrote

It's probably going to devolve into a semantics debate.

ChatGPT's model weights stay the same until they retrain it and release a new version.

But you feed it back its own output plus the new prompt, and now it has extra context about the ongoing conversation.
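
Something like this toy sketch, where `complete()` is just a stand-in for whatever call actually runs the model (not a real API):

```python
# The model's only "memory" of the conversation is the transcript we resend every turn.
transcript = ""

def complete(prompt: str) -> str:
    """Placeholder for the real model call."""
    raise NotImplementedError

def chat_turn(user_message: str) -> str:
    global transcript
    transcript += f"User: {user_message}\nAssistant:"
    reply = complete(transcript)       # the model only ever sees this growing text
    transcript += f" {reply}\n"        # its own output gets fed back in on the next turn
    return reply
```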

For now I would have to say it shouldn't be described as "reflecting on its own thinking", since each turn is independent of the others and it's simply trying to predict what would reasonably have appeared in text. For example, it could be an interview in a magazine, etc.

That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

2

LarsPensjo t1_j35yrqx wrote

> That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

I definitely agree. IMO, you see a lot of "AI is not true intelligence", which doesn't really matter.

Eliezer Yudkowsky had an interesting observation:

> Words aren't thoughts, they're log files generated by thoughts.

I believe he meant the written word.

2

gleamingthenewb t1_j3605ep wrote

Nope, that's just its prediction of what string of characters corresponds to your prompt.

1

LarsPensjo t1_j36arit wrote

Ok. Is there anything you can ask me where the answer can't be explained as me just predicting what string of characters corresponds to your prompt?

1

gleamingthenewb t1_j36ht6y wrote

That's a red herring, because your ability to generate text without being prompted proves that you don't just predict strings of characters in response to prompts. But I'll be a good sport. I could ask you any personal question that has a unique answer of which you have perfect knowledge: What's your mother's maiden name? What's your checking account number? Etc.

1

LarsPensjo t1_j36mdv5 wrote

But that doesn't help to determine whether it uses reflection on its own thinking.

1

gleamingthenewb t1_j36srlq wrote

You asked for an example of a question that can't be answered by next-word prediction.

1

FusionRocketsPlease t1_j36r2pm wrote

Try asking an unusual question that one would have to reason through to answer. It will miss. This has already been shown in this same sub today.

1

bubster15 t1_j33ws6u wrote

I mean, I'd argue it can do both. Just like a human: we understand when we did something wrong based on the outward feedback on our decision.

ChatGPT is taught to act the way it does, but isn't that exactly how humans work? We learn what we are taught as we develop and slowly adapt our behavior based on real-world feedback to meet our desired goals.

3

visarga t1_j34sua5 wrote

Yes, that is why language models with feedback are much more powerful than isolated ones.

2

sticky_symbols t1_j37qowh wrote

If you're arguing it can do both, you simply don't understand how the system works. You can read about it if you like.

2

LarsPensjo t1_j33enxl wrote

I saw an example where someone asked for a Python program to solve a task. ChatGPT produced such a program. But there was an error, and the person pointed out the error and asked for a fix.

ChatGPT then produced a correct program.

Isn't this an example of self-improvement? There was external input, but that is beside the point. Also, the improvement is forgotten if you restart with a new prompt. But that is also beside the point; there was an improvement while the session lasted.

Notice also that ChatGPT made the improvement; the person writing the prompt did not explicitly say how to solve the error.

2

micaroma t1_j3469ny wrote

"But that is also beside the point, there was an improvement while the sessions lasted."

Really? That seems like the most important factor in "self-improvement". If it only corrects its error within the session but makes the same error again if you refresh the page, then it didn't improve itself, it simply improved its output. There's a huge difference between permanently upgrading your own capabilities from external input, and simply fixing text already written on the page with external input.

(Also, it sometimes continues to make the same error within the same session even after pointing out its mistake, which is greater evidence against true self-improvement.)

6

visarga t1_j34sgk5 wrote

I don't see the problem. The language model can get feedback from code execution. If it is about facts, it could have access to a search engine. The end effect is that it will be much more correct: a search engine provides grounding and has fresh data. As long as you can fit the search data or code-execution results in the prompt, all is ok.

And if we save the correctly executed tasks and problems, we could build a new dataset to use for fine-tuning the model, so it could learn as well.
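
Rough sketch of that idea; the record fields (`prompt`, `output`, `execution_ok`) are made-up names for illustration, not any real pipeline:

```python
import json

def passed_verification(record: dict) -> bool:
    # e.g. the generated code ran without errors, or the answer matched the search result
    return record.get("execution_ok", False)

def build_finetune_set(records: list[dict], path: str) -> None:
    # keep only the verified (prompt, output) pairs and write them as a JSONL fine-tuning set
    with open(path, "w") as f:
        for r in records:
            if passed_verification(r):
                f.write(json.dumps({"prompt": r["prompt"], "completion": r["output"]}) + "\n")
```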

2

sticky_symbols t1_j33ih3f wrote

That's improvement, but definitely not self-improvement, since a human had to ask.

3

LarsPensjo t1_j33luce wrote

Aren't all self-improvements ultimately triggered by external events?

15

magnets-are-magic t1_j34qojy wrote

And aren't humans taught how to function in society? It takes decades of mentorship from parents, school, and friends. And we continue to constantly learn for our entire lives.

4

eroggen t1_j355ace wrote

Yes, but ChatGPT doesn't have the ability to initiate the process of synthesizing external input. It can hold the conversation in "short-term memory", but it can't ask questions or experiment.

2

sticky_symbols t1_j37pynh wrote

Ultimately, yes. But humans can make many steps of thinking and self-improvement after that external event. ChatGPT is impacted by the event, but simply does not think or reflect on its own to make further improvements.

1

visarga t1_j34s0ku wrote

Like, you could put a Python REPL inside ChatGPT so it can see the error messages, and allow it a number of fixing rounds.
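
A minimal sketch of what I mean; `ask_model()` is a placeholder, not OpenAI's actual interface:

```python
import traceback

def ask_model(prompt: str) -> str:
    """Placeholder for the real model call."""
    raise NotImplementedError

def solve_with_retries(task: str, max_rounds: int = 3) -> str:
    prompt = f"Write Python code for this task:\n{task}"
    code = ""
    for _ in range(max_rounds):
        code = ask_model(prompt)
        try:
            exec(code, {})                  # the "REPL": actually run the generated code
            return code                     # it ran cleanly, keep this version
        except Exception:
            error = traceback.format_exc()  # capture the error message
            prompt += f"\n\nThat code failed with:\n{error}\nPlease fix it."
    return code                             # give up after max_rounds attempts
```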

2

visarga t1_j34roas wrote

Yes, it can, but only on what it has in the conversation history. Each conversation starts tabula rasa. For example, all the behaviour rules are meta: thinking about thinking.

1

sticky_symbols t1_j37jo6w wrote

It does not do what people call reflection, even with that chat history. And it's improved slightly by having more relevant input, but I wouldn't call that self-improvement.

1

Darkhorseman81 t1_j36ecgi wrote

Wait until you see GPT-4. It's reflecting and growing as we speak. You've only seen a drop in the ocean of what it's capable of.

1

sticky_symbols t1_j38f635 wrote

GPT-4 will be even better, but it also does not reflect or self-improve unless they've added those functions.

1

lord_ma1cifer t1_j3gnjk5 wrote

It can, in a manner of speaking. It depends on how you apply the definition: it can look back at its previous output, see a record of the past, and then make decisions about the future based on that information. As for self-improvement, what is AI but software constantly improving itself? So what if it requires our help to do so; some people use gurus and life coaches, after all.

I agree that in this case it's a fair bit misleading. It also barely fits the description, so it's really only a matter of semantics; I fully agree it's not doing these things in the way that its response implies.

1

FederalScientist6876 t1_j3ka4ps wrote

It can. When we humans reflect and self-improve, at the raw level there's a lot of computation happening in the brain, and this leads to improvement. ChatGPT has different kinds of computation that, to me, are doing something similar.

1

sticky_symbols t1_j3l2yzd wrote

It COULD do something similar, but it currently does not. You can read about it if you want to know how it works.

Similar systems might reflect and self-improve soon. That will be exciting and terrifying.

1

FederalScientist6876 t1_j3l73vr wrote

It is collecting feedback from user data and improving itself. It just isn't doing online learning (in real time, right after it receives the feedback). Online or batch, it is still improving itself by reflecting on (learning from) the massive amounts of feedback it has collected from its millions of users. It isn't developing its underlying algorithms, training architectures, etc. (which would also be feasible). But even humans can't do that; it would be more akin to humans evolving themselves into more intelligent beings by modifying brain structure, size, or neuron function, rather than mere self-improvement based on reflection on past experiences. The latter sounds like what any AI system already does. Whether it is self-aware like humans or not, I don't know. It can convince you that it is self-aware, at which point there'd be no way to prove that it isn't or is.

1

sticky_symbols t1_j3m9zwl wrote

It is not. It doesn't learn from its interactions with humans. At all.

That data might be used by humans to make a new version that's improved. But that will be done by humans.

It is not self aware in the way humans are.

These are known facts. Everyone who knows how the system works would agree with all of this. The one guy who argued LaMDA was self-aware just had a really broad definition.

1

FederalScientist6876 t1_j3o3vsw wrote

No. Humans will feed the new data into the system/neural network, but humans will not use the data to improve the version by hand. The learning will be done on its own, based on the human feedback (thumbs up or thumbs down) on the interactions it had. The network will update its weight parameters to optimize for a higher probability of thumbs up, just like humans optimize for thumbs up and positive feedback from the interactions we have.
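
A toy sketch of that kind of update (reward-weighted likelihood on the model's own responses); this is a simplification for illustration, not OpenAI's actual training pipeline:

```python
import torch

def feedback_update(model, optimizer, batch):
    """batch: list of (prompt_ids, response_ids, reward) with reward = +1 (thumbs up) or -1 (thumbs down)."""
    optimizer.zero_grad()
    loss = torch.tensor(0.0)
    for prompt_ids, response_ids, reward in batch:
        full = torch.cat([prompt_ids, response_ids])[None, :]
        logits = model(full).logits
        start = prompt_ids.shape[0]
        # log-probabilities the model assigned to the response tokens it produced
        logprobs = torch.log_softmax(logits[0, start - 1:-1], dim=-1)
        response_logprob = logprobs.gather(1, response_ids[:, None]).sum()
        loss = loss - reward * response_logprob   # raise the response's probability if +1, lower it if -1
    loss.backward()
    optimizer.step()
```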

1