Comments
theBarneyBus t1_j1j0r2p wrote
The programmers don't program question-answer pairs. Rather, they program a tool that "reads" millions of online articles, then tries to generate responses that sound the way it thinks a response about X would sound.
This often means it gets things (surprisingly) right, but it can also sound extremely confident while giving funky and/or misleading information.
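For anyone who wants one notch more detail, here is a sketch of the standard objective this kind of "reading" uses (ChatGPT's exact recipe isn't public): the tool is trained to predict the next word, tuning its parameters $\theta$ to minimize

$$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_1, \dots, x_{t-1}),$$

i.e. to make each actual next word $x_t$ as probable as possible given the words that came before it. Nothing in that objective rewards being *true*, only sounding like the training text.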
its-octopeople t1_j1ja27h wrote
I've been keeping an eye out for bot accounts using ChatGPT here on Reddit. I caught one the other day confidently claiming there were 'aluminum lounge bars' where you could be served a range of aluminum-based drinks.
nekokattt t1_j1jf01c wrote
I prefer copper drinks. It has that nice aftertaste.
Purplekeyboard t1_j1j5pdk wrote
ChatGPT is essentially just a text predictor.
It is trained on basically all the text on the internet, and it uses this to learn what words tend to follow what other words. It's very powerful and sophisticated, to the point where it can write proper English sentences which are on topic and which are (mostly) accurate.
So if you say, "Where was Elvis Presley born?", it predicts that after this text would generally come text which gives the answer to the question, and that's the text it gives you. And because it has been trained on the text of the entire internet, it knows the answer to this question.
If you say, "Please write me a brief essay on the difference between capitalism and socialism", it predicts how such an essay would likely start, then writes that text. Then predicts how such an essay would likely continue, then writes that text. And so on, until the essay is completed. As it's been trained on the text of the internet, it has large volumes of text in its training material about capitalism and socialism and the differences between them.
ChatGPT is specifically trained to be a chat bot, and it probably has multiple censorship routines and "be a nice chatbot" routines which identify when your prompt or its own writing is something against its rules.
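How those "be a nice chatbot" checks actually work isn't public, but a common pattern is a separate filter that screens both the user's prompt and the model's own draft reply before it goes out. A purely hypothetical sketch:

```python
# Hypothetical sketch of a content filter wrapped around a chat model.
# `classify_policy_violation` stands in for a separate moderation
# classifier; the real system's internals are not public.

BLOCKED_TOPICS = {"violence", "self-harm"}  # illustrative only

def classify_policy_violation(text: str) -> bool:
    # Stand-in: a real filter would be a trained classifier,
    # not a naive keyword match.
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def safe_chat(prompt: str, model) -> str:
    if classify_policy_violation(prompt):
        return "I can't help with that."
    reply = model(prompt)
    if classify_policy_violation(reply):
        return "I can't help with that."
    return reply

print(safe_chat("tell me a joke", model=lambda p: "Why did the chicken..."))
```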
phiwong t1_j1j4kes wrote
" What is it that lets them put their faith in the model given that the model is something purely mathematical and not instinctual? "
The counter question should be asked: your question implies that instinct is a better way to develop trust in knowledge than logic and mathematics. Why do you believe so? Can you appraise your own "knowledge base" and determine how much of what you believe you know was developed through actual observation and consideration? How much of what you think you know is "borrowed" from someone else's experience and knowledge? Why do you trust your knowledge in this circumstance? How do you make this evaluation?
One example: most people "know" that 1 + 1 = 2. I am confident that most people don't know how to prove this, yet they believe it to be true. Why? It cannot be instinct, surely?
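For the curious, the claim really can be stated and machine-checked in a proof assistant. A minimal sketch in Lean 4, using its built-in natural numbers (where this particular proof happens to go through by pure computation):

```lean
-- 1 + 1 = 2 on Lean's natural numbers: both sides compute to
-- Nat.succ (Nat.succ Nat.zero), so reflexivity closes the goal.
example : 1 + 1 = 2 := rfl
```

The one-liner hides the work: the heavy lifting lives in the definitions of `Nat` and `+`, which is exactly the kind of foundation most people trust without ever inspecting.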
Flair_Helper t1_j1jj4sm wrote
Please read this entire message
Your submission has been removed for the following reason(s):
Questions about a business or a group's motivation are not allowed on ELI5. These are usually either straightforward, or known only to the organisations involved, leading to speculation (Rule 2).
If you would like this removal reviewed, please read the detailed rules first. If you believe this submission was removed erroneously, please use this form and we will review your submission.
drafterman t1_j1j0wx2 wrote
There is no guarantee that ChatGPT will provide accurate answers. That isn't even the goal of ChatGPT.
ChatGPT is essentially a language prediction model. You provide a prompt, and then, using the immense body of text it was trained on plus its machine learning algorithms, it generates what it thinks should come after that prompt. But it has no conception of what is factually true; it only has strings of information.
For example, if you prompt it with "What is 2 + 2?", it will probably say 4. Not because it is doing a mathematical calculation, or understands what math is, or because it knows 4 is right, but because in all of its training data the text "2 + 2" is overwhelmingly followed by the text "4".
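Here is a toy version of that "overwhelmingly followed by" behavior. It is a deliberate oversimplification; real models generalize from patterns rather than looking up exact strings, but the spirit is the same:

```python
from collections import Counter, defaultdict

# Toy "training data": (prompt, continuation) pairs the model has seen.
corpus = [
    ("what is 2 + 2 ?", "4"),
    ("what is 2 + 2 ?", "4"),
    ("what is 2 + 2 ?", "5"),  # the internet contains mistakes too
]

# "Training": tally which continuation follows each prompt.
follows = defaultdict(Counter)
for prompt, continuation in corpus:
    follows[prompt][continuation] += 1

def predict(prompt: str) -> str:
    # Answer with whatever most often followed this prompt in training.
    return follows[prompt].most_common(1)[0][0]

print(predict("what is 2 + 2 ?"))  # -> "4", by frequency, not arithmetic
```

Note that the answer is right only because the majority of the training examples were right; nothing in the mechanism checks the arithmetic.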
In fact, more sophisticated models can actually be more prone to giving incorrect answers in some situations, as illustrated here:
https://www.youtube.com/watch?v=w65p_IIp6JY