Submitted by redbullkongen t3_11we9g2 in singularity
Siddhanta101 t1_jcxl1xb wrote
"Off Until Upper Grades" Isn't that still the same case?
It would be like children using ChatGPT to write a sentence when they don't even know how to write a complete sentence.
User1539 t1_jcxyqcn wrote
Yeah, I think the teachers won this argument.
I can't imagine a world where they allow students to have GPT write their essays for them, either.
My daughter has already had 'practicals' in her science class in middle school, and it's basically a 15-minute conversation about the subject so the teacher can assess if you're getting the material and not just memorizing the book.
I think we're just going to have to do more of that, and less rote testing. We'll have more short essays written in class and things like that.
I know people who teach online for university, and they say they wouldn't trust an online degree. They know their students are cheating, but if you can't make them sit in front of you to take tests, there's no way to know.
SirEblingMis t1_jcy9n6e wrote
User1539 t1_jcyase7 wrote
Did you read the article, though?
"quote from content created by ChatGPT in their essays"
They're allowed to use it as a source, not to write an entire essay.
SirEblingMis t1_jcybhf3 wrote
Yes, but that's still wild to me, since ChatGPT can make shit up and won't itself cite where anything came from. It's a language model trained on internet data.
Where it gets the data behind what students will cite is the issue, and I can imagine that presenting a problem.
When we read other papers or articles, there's always a bibliography you can use to go check what they based their thoughts on.
User1539 t1_jcyh91j wrote
I think it can cite sources if you ask it to, or at least it can find supporting data to back up its claims.
That said, my personal experience with ChatGPT was like working with a student who's highly motivated and very fast, but only copying off other people's work without any real understanding.
So, for instance, I'd ask it to code something ... and the code would compile and be 90% right, but ChatGPT would confidently state 'I'm opening port 80', even though the code was clearly opening port 8080, a port that's extremely common in example code.
So, you could tell it was copying a common pattern, without really understanding what it was doing.
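To make that concrete, here's a minimal Python sketch of the kind of mismatch I mean. The port numbers come from my example above; the rest (a bare socket server) is just a hypothetical stand-in for whatever the model generated:

```python
import socket

# The model described this code as "opening port 80", but the
# constant it actually copied from tutorial-style code is 8080.
PORT = 8080  # extremely common placeholder port in example code

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", PORT))  # binds 8080, not 80
server.listen(1)
print(f"Listening on port {PORT}")
```

The pattern compiles and runs fine; the failure is in the confident description, not the code.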
It's still useful, but it's not 'intelligent', so yeah ... you'd better check those sources before you believe anything ChatGPT says.
ErikaFoxelot t1_jcyjuvn wrote
GPT-4 is a little better about this, but where it excels the most is when used as a partner rather than a replacement. You still have to know what you're doing to effectively use what it gives you.
User1539 t1_jcz0uft wrote
Yeah, I've definitely found that in coding. It works at the level of a very fast and reasonably competent junior coder, but it doesn't 'understand' what it's doing; it's as if it's just copying what looks right off Stack Overflow and gluing it all together.
That's useful if I need a straightforward function written, but in its current state it's not going to design applications you'd want to work with.
Of course, in a few weeks we'll be talking about GPT-5, and who even knows what that'll look like?
magnets-are-magic t1_jczs8oe wrote
It makes up sources even when you explicitly tell it not to. I've tried a variety of approaches, and it's unavoidable in my experience. It will make up authors, book/article/paper titles, dates, statistics, content, etc. It will make all of them up and will confidently tell you that they're real and accurate.
User1539 t1_jczslss wrote
Yeah, that reminds me of when it confidently told me what the code it produced did ... but it wasn't right.
It's kind of weird when you can't say, 'No, can't you read what you just produced? That's not what that does at all!'
visarga t1_jd0akyj wrote
This is an artefact of RLHF (reinforcement learning from human feedback). The model comes out well calibrated after pre-training, but that final stage of training breaks the calibration.
https://i.imgur.com/zlXRnB6.png
Explained by one of the lead authors of GPT-4, Ilya Sutskever: https://www.youtube.com/watch?v=SjhIlw3Iffs&t=1072s
Ilya invites us to "find out" whether we can quickly get past the hallucination phase; maybe this year we'll see his work pan out.
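For anyone unfamiliar with the term: a model is well calibrated if, when it reports 70% confidence, it's right about 70% of the time. Here's a toy Python sketch of one standard way to measure that (expected calibration error); the numbers are made up for illustration, not taken from the talk:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bucket predictions by confidence, then compare average
    confidence to actual accuracy within each bucket."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bucket
    return ece

# Toy example: an overconfident model (high confidence, mediocre accuracy)
conf = [0.9, 0.95, 0.85, 0.9, 0.8]
hit  = [1,   0,    1,    0,   1]
print(expected_calibration_error(conf, hit))  # large gap => poorly calibrated
```

The claim above is that pre-training leaves this gap small, and RLHF widens it.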
magnets-are-magic t1_jd1a58w wrote
Super interesting, thanks for sharing!
Ricky_Rollin t1_jczr3hg wrote
In many ways, it's just advanced Google. I'm in a specialty field and have published something that was repeated word for word, exactly as I wrote it, when I asked ChatGPT about the topic.
User1539 t1_jczs306 wrote
Yeah, as a description of how people use it, that's fair, and I've been asking Google straight-up questions for years already.
I do think it's changing the game for a lot of things, like how customer service bots are going to be actually good now.
SmoothPlastic9 t1_jcyif3e wrote
BingGPT does cite its sources
ground__contro1 t1_jd17jet wrote
Btw it's a terrible source. It can easily be wrong about established facts. Last week it tried to tell me Thomas Digges posited the existence of alien life. Digges was a pretty early astronomer, working when the church was still dominant, so that really surprised me. When I questioned it again, it "corrected" itself and apologized... which, great, but if I hadn't already known enough about Digges to be suspicious, I would have accepted it alongside all the other (correct) information.
ChatGPT is awesome, but it's no more a source than Wikipedia. In fact it's potentially worse, because nobody is fact-checking what ChatGPT says to you in real time, whereas there's a chance others will have corrected a wiki page by the time you read it.
User1539 t1_jd2la65 wrote
Oh yeah, I've played with it for coding, and it told me it did things it did not do. It couldn't read back the code it had just produced, so there's no good way to 'correct' it.
It spits out lots of 'work', but it's not always accurate, and people who are used to computers always being correct are going to have to get used to the fact that this is really more like having a personal assistant.
Sure, they're reasonably bright and eager, but sometimes wrong.
I don't think GPT is leading directly to AGI, or anything, but a tool like this, even when sometimes wrong, is still going to be an extremely powerful tool.
When you see GPT passing law exams and things like that, you can see it's not getting perfect scores, but it's still probably more likely than a first-year paralegal to get you the right example of case law, and it does it instantly.
Also, in 4 months it's improved on things like the bar exam about as much as you'd expect a human to improve over 4 years of study.
It's a different kind of computing platform, and people don't know quite how to take it yet. Especially people used to the idea that computers never make mistakes.
Alex_2259 t1_jcydrib wrote
Good thing I got my online degree (it was mostly essay- and project-based, as opposed to purely exam-based) before Mr. GPT existed.
Using essays and projects as the workaround in online classes, to check that a student isn't avoiding the work, no longer does the trick in the era of Mr. G.
milosh_the_spicy t1_jcylfhf wrote
For essays, the assignment could be to have ChatGPT respond to a prompt and then have the student complete a critical analysis of the output?
Catenane t1_jcxvnpz wrote
Emoji input chatgpt ftw
fabulousfang t1_jcxqpm3 wrote
I mean, if children CAN use ChatGPT to form full sentences when they aren't expected to form full sentences on their own, that's overachieving, isn't it? I'd agree with the school system not giving them credit for it, but as a potential parent I'd full-on commend them.
ToHallowMySleep t1_jcy7o7q wrote
Judging by your reply you need some help to form full sentences already ;)
Using a tool to achieve a result sidesteps the question of competency, and that is what we measure in class. Not "can I ask someone else to do something for me".
Of course, classes need to change to accommodate tools available to everyone (and not favour just those with privileged access), but only the use of those tools to aid your own understanding, not to replace what you do. Riding 100m on a bike is not the same as running it. Performing multiplication manually is not the same as doing it on a calculator (if that aptitude is what is being tested). Handing in an essay that was written for you does not test your comprehension or knowledge of the subject matter.