Submitted by balancetheuniverse t3_11rc0wa in dataisbeautiful
Empty_Insight t1_jc8xajq wrote
Reply to comment by Jackdaw99 in Exam results for recently released GPT 4 compared to GPT 3.5 by balancetheuniverse
It's an algorithm trained to mimic human output from a prompt. It's only as good as what it was trained on, and people in general suck at hard science and advanced math.
Take, for example, how people 'tricked' ChatGPT into telling them how to make meth. It would actually give them different answers depending on how they asked the question, which, on the off chance that isn't obvious, is not how chemistry works. I also never saw it give an answer that was actually 'right' in terms of organic chemistry, either for pure pharmaceutical-grade methamphetamine (aka Desoxyn) or for street meth. It sure seemed right if you didn't understand the actual chemistry. It was convincing even though it was wrong, and I'm guessing the same thing is happening with calculus.
Friendly reminder that Wolfram Alpha exists if someone is having trouble with calculus. It not only solves the problem, it also shows you how it solved it step by step, so it's a good study tool too.
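If you'd rather script it than use the website, a computer algebra system like SymPy gives you the same flavor of thing locally. To be clear, SymPy isn't Wolfram Alpha, but it illustrates the contrast with a language model: it actually manipulates the symbols by real rules instead of predicting likely text. A minimal sketch:

```python
# A computer algebra system applies actual rules of calculus to a
# symbolic expression -- no pattern matching over prior text involved.
import sympy as sp

x = sp.Symbol('x')
expr = x**2 * sp.sin(x)

derivative = sp.diff(expr, x)      # symbolic differentiation
integral = sp.integrate(expr, x)   # symbolic integration (by parts, internally)

print(derivative)   # x**2*cos(x) + 2*x*sin(x)
print(integral)     # -x**2*cos(x) + 2*x*sin(x) + 2*cos(x)
```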
Denziloe t1_jc9v9p2 wrote
>It's an algorithm trained to mimic human output from a prompt.
This is an over-simplification: the whole deal with ChatGPT and GPT-4 is that they weren't just trained on huge quantities of unlabelled human text; they were also specifically trained to be "aligned" to desirable properties like truth-telling.
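For anyone curious what "specifically trained to be aligned" means mechanically: the published InstructGPT-style recipe fits a reward model to human preference comparisons between responses, then fine-tunes the language model against that reward with RL. Here's a toy sketch of just the reward-model loss; this isn't OpenAI's code, and all the names are purely illustrative:

```python
# Toy sketch of the pairwise preference loss used to train a reward
# model for RLHF (InstructGPT-style). The reward scores here stand in
# for the outputs of any network that rates a response's quality.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Human labelers preferred "chosen" over "rejected"; this pushes
    # the reward model to score chosen responses higher.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Example: scores assigned to a batch of (chosen, rejected) response pairs
chosen = torch.tensor([1.2, 0.3, 2.1])
rejected = torch.tensor([0.4, 0.9, 1.0])
print(preference_loss(chosen, rejected))  # shrinks as chosen pulls ahead
```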
Jackdaw99 t1_jc9244f wrote
But surely it must rate the sources it uses. Besides, it seems to be very good at SAT math, which is obviously easier but would rely on the same mimicry.
thedabking123 t1_jc93iop wrote
That's not the way the system works.
You're using symbolic logic; its thinking is more like an intuition, a vastly more accurate intuition than ours, but limited nonetheless.
And the kicker? Its intuition is about what words, characters, etc. you are expecting to see. It doesn't really logic things out, and it doesn't hold concepts of objects, numbers, mathematical operators, and so on.
It intuits an answer, having seen a billion similar equations in the past, and guesses at what characters on the keyboard you're expecting to see based on pattern matching.
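Here's a toy version of that "guessing based on pattern matching" point, shrunk down to a character bigram counter. A real model is a transformer over tokens rather than a count table, but the objective has the same shape: predict the next symbol from what usually followed it before. Note that nothing in here does arithmetic:

```python
# Toy illustration of "pattern matching over things seen before":
# a character bigram model that predicts the next character purely
# from frequency, with no concept of numbers or operators.
from collections import Counter, defaultdict

corpus = "2+2=4 2+3=5 3+3=6 2+2=4 2+2=4"

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(char: str) -> str:
    # Return the most frequently seen follower -- pure "intuition."
    return counts[char].most_common(1)[0][0]

print(predict_next('='))  # '4', only because '=4' was most common
```

The "right answer" pops out only because '=4' was the most common pattern in the training text. Change what the surrounding text looks like and the "intuition" changes with it.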
Jackdaw99 t1_jcawuaf wrote
I can tell your reply wasn't written by GPT. The possessive "its" doesn't take an apostrophe....
jk
thedabking123 t1_jcbbado wrote
lol, it may well make the same mistake if enough people on the internet make it... OpenAI trains the model on huge amounts of web text, errors included.
Empty_Insight t1_jc94vo9 wrote
Even if the source is 'right,' it might not pick up the context necessary to answer the question appropriately. The fact that different prompts produced different answers to what is effectively the same question would seem to support that idea.
Maybe ChatGPT could actually give someone a correct answer for how to make meth if given the right prompt, but in order to even know how to phrase it you'd need to know quite a bit of chemistry, and at that point you could just as easily figure it out yourself with a pen and paper. That has the added upside of the DEA not kicking in your door for "just asking questions," too.
As far as calculus goes, I can imagine some of the inputs might be confusing to an AI that is not specifically trained for them, since the minutiae of formatting are essential. There might be something inherent to calculus that the AI has difficulty understanding, or it might just be user error. It's hard to say.
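One concrete guess at why formatting minutiae matter: the model never sees math as math, only as token sequences, and the same expression written two ways tokenizes differently. A sketch using OpenAI's tiktoken library (assuming cl100k_base is still the right encoding for GPT-4):

```python
# The same integral written two ways produces different token
# sequences, so the model may treat them as different-looking problems.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4's encoding

plain = "integral of x^2 * sin(x) dx"
latex = r"\int x^{2} \sin(x) \, dx"

print(enc.encode(plain))
print(enc.encode(latex))
# Different token IDs and lengths for the same underlying math.
```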
Edit: the explanation from the other person who responded is more correct; listen to them. My background in CS is woefully lacking, but their answer seems right based on my limited understanding of how this AI works.
fortnitefunnies3 t1_jc9cvp4 wrote
Teach me