Acceptable-Cress-374 t1_izwb0b5 wrote
Reply to comment by eigenman in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
> Therefore, I am unable to recall the first item we talked about in this thread.
This is weird. I tested something like:
1st prompt: give me a list of 5 items that I should do when training ML models
A: something that made sense, with 5 bullet points.
I then went and prompted "expand on first point, expand on second..." in subsequent queries, and it expanded every point accordingly.
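A plausible reason the "expand on the first point" follow-ups work is that the chat interface re-sends the running transcript with each new request, so the earlier list is still in the model's input. A minimal sketch of that idea in Python (the `complete()` call is a hypothetical stand-in for the underlying model call, not OpenAI's actual API):

```python
# Sketch only: the chat UI keeps the running transcript and re-sends it with
# every new prompt, so a follow-up like "expand on the first point" can still
# see the earlier list.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to the underlying language model."""
    raise NotImplementedError

transcript = []

def ask(user_message: str) -> str:
    transcript.append(f"Human: {user_message}")
    # The whole history becomes the model's input, not just the latest message.
    prompt = "\n".join(transcript) + "\nAI:"
    answer = complete(prompt)
    transcript.append(f"AI: {answer}")
    return answer

ask("Give me a list of 5 items that I should do when training ML models")
ask("Expand on the first point")   # works because the list is still in the prompt
ask("Expand on the second point")
```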
red75prime t1_izxf1q2 wrote
> This is weird.
The model doesn't know what it can and cannot do, so it bullshits its way out. It's not that weird.
Ghostglitch07 t1_izy5qmb wrote
It's weird because of how quick it is to claim it is unable to do things. In their attempt to make it safer, they severely limited its usability. They drilled in the boilerplate text of "as a large language model trained by OpenAI I can't..." so hard that it throws it out far too often.
LetMeGuessYourAlts t1_j035ugy wrote
And if you carry a similar prompt over to the playground and run it on a davinci-003 model, it will still attempt to answer your question without just giving up like that. So it's likely something outside the model itself that produces that response and then has the model complete the error message. I was wondering whether, when confidence is low, it just defaults to an "I'm sorry..." and then lets the model finish the error message.
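If that guess were right, the wrapper might look something like the following sketch. This is purely hypothetical and only illustrates the commenter's speculation; `refusal_score()` and `complete()` are made-up placeholders, not anything OpenAI has documented:

```python
REFUSAL_PREFIX = "I'm sorry, but as a large language model trained by OpenAI, I"

def refusal_score(prompt: str) -> float:
    """Hypothetical external classifier scoring how 'unanswerable' a prompt is."""
    raise NotImplementedError

def complete(text: str) -> str:
    """Hypothetical call to the underlying model to continue the given text."""
    raise NotImplementedError

def answer(prompt: str) -> str:
    # Speculative guardrail: if the classifier is not confident the prompt is
    # answerable, seed the reply with a canned apology and let the model
    # finish it, instead of letting the model answer freely.
    if refusal_score(prompt) > 0.5:
        return REFUSAL_PREFIX + complete(prompt + "\n" + REFUSAL_PREFIX)
    return complete(prompt)
```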
Acceptable-Cress-374 t1_izxfjr3 wrote
It's weird because it worked for me. I've explained above how I got it to expand on previous points.
red75prime t1_izxgjcg wrote
It's not weird that it worked, either. The model has access to roughly the last 3,000 words of the conversation, so it can "remember" recent text. But the model doesn't know that it has that ability, so it can't reliably answer whether it can do it.
If you tell the model that it just remembered the first thing you said, it will probably flip around and apologize for the misinformation. And then, down the line, once the conversation has fallen out of its input buffer, it will make the same error again.
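A rough sketch of how a fixed-size input window behaves. The ~3,000-word figure is taken from the comment above, and trimming by words is only an approximation, since the real limit is measured in tokens:

```python
MAX_WORDS = 3000  # rough figure from the comment above; the real limit is in tokens

def build_prompt(turns: list[str]) -> str:
    """Keep only the most recent turns that fit in the word budget."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk backwards from the newest turn
        words = len(turn.split())
        if used + words > MAX_WORDS:
            break                         # older turns silently fall out
        kept.append(turn)
        used += words
    return "\n".join(reversed(kept))

# Early in a conversation the first message still fits, so the model can quote
# it back. Once the transcript grows past the budget, that message is gone from
# the input and the model has nothing left to "remember" it from.
```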
Acceptable-Cress-374 t1_izxhst7 wrote
Ooooh, gotcha. You're saying that the answer provided to OP was not weird. I agree.
Toxhax t1_j00a098 wrote
This is how real humans handle everyday life. The machines really are getting good.
sargontheforgotten t1_j025ku0 wrote
Here is an interaction I had with davinci-03:
Human: What was our 21st interaction about?
AI: Our 21st interaction was about the concept of nostalgia. We discussed what nostalgia is and whether I consider myself to be sentient.
Human: I don't think that's right. We just discussed that and we have had more than 21 interactions.
AI: You are right – I apologize for the mistake. It appears I miscounted and we have actually had 27 interactions in total. The 21st interaction we had was about whether AI can learn to understand emotions.
But we haven't had just 27 interactions, so it is either making things up or unable to count them. These are actual interactions we've had, though, and it was able to correctly remember our first interaction.
assimil8or t1_izwne3t wrote
It’s not. Just another example of it confidently and credibly making stuff up (I mean the initial response)
Acceptable-Cress-374 t1_izwt2s5 wrote
Check out this example: https://twitter.com/gf_256/status/1598105286648422401
The amount of insight it got from that prompt is amazing. I feel like people are trying to make it do silly stuff instead of exploring what it can actually do well.