MrFlamingQueen
MrFlamingQueen t1_je0w3ut wrote
Reply to comment by cegras in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Not sure about the training corpus, but like you mentioned, there are tons of other forms of textbooks and solution manuals for textbook problems on sites like GitHub, Stack Exchange, etc.
MrFlamingQueen t1_je0j29h wrote
Reply to comment by cegras in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
It feels like the majority of the people in this discussion have no idea what computer science is or what LeetCode tests.
As you mentioned, there are hundreds of websites devoted to teaching the LeetCode design patterns, and entire books devoted to learning and practicing these problems.
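For a concrete example of the kind of pattern those sites drill, here's the classic two-pointer technique on a sorted array (a toy `two_sum_sorted` helper of my own, not from any particular book):

```python
def two_sum_sorted(nums, target):
    """Two-pointer pattern: find indices of two values summing to target
    in a sorted list, in O(n) time instead of brute-force O(n^2)."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1   # need a larger sum: advance the left pointer
        else:
            hi -= 1   # need a smaller sum: retreat the right pointer
    return None

print(two_sum_sorted([1, 3, 4, 6, 8, 11], 10))  # (2, 3)
```

Once you've practiced a handful of patterns like this one, most "new" problems reduce to recognizing which pattern applies.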
MrFlamingQueen t1_jdnmkby wrote
Reply to comment by drinkingsomuchcoffee in [D] Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
🤫🤫 Shhhhh, this is my research area.
MrFlamingQueen t1_j1wp6pv wrote
Reply to comment by j03ch1p in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Yes, that was AI-written, as a cheeky way of demonstrating that it can be recognizable once you have a writing sample of mine from the previous post.
MrFlamingQueen t1_j1vskd9 wrote
Reply to comment by respeckKnuckles in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Thank you for your response. You are correct that it may be easier to distinguish between the work of an A-student and AI-generated text. However, it is possible that professors can still differentiate between AI-generated text and the work of a B-earning or C-earning student, even if it is more difficult. This is because professors are trained to evaluate the quality and originality of student work, and may be able to identify certain characteristics or patterns that suggest the work was generated by an AI.
As for the tools that I mentioned, it is possible that they may also be able to differentiate between AI-generated text and human-written text to some degree. These tools use advanced machine learning algorithms to analyze text and identify patterns or characteristics that are indicative of AI-generated text. While they may not be able to reliably distinguish between AI-generated text and human-written text in all cases, they can still be useful for identifying potentially suspect text and alerting professors to the possibility that it may have been generated by an AI. Overall, it is important for professors to remain vigilant and use their expertise and judgement to evaluate the quality and originality of student work.
MrFlamingQueen t1_j1vmykp wrote
Reply to comment by respeckKnuckles in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
They're not worried because, on some level, it is recognizable, especially if you have a writing sample from the student.
On the other hand, there are already tools that can detect it by scoring the text against the model itself, checking how likely the model is to have produced those token sequences.
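A minimal sketch of that likelihood-scoring idea, with a toy character-bigram model standing in for a real LLM (the function names and corpus are made up for illustration, not any actual detector's API):

```python
import math
from collections import Counter

def bigram_model(corpus):
    """Fit a toy character-bigram model; a real detector would use
    the LLM's own next-token probabilities instead."""
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus[:-1])
    # Add-one smoothing over a ~27-symbol alphabet so unseen pairs get
    # a small nonzero probability.
    return lambda a, b: (pairs[(a, b)] + 1) / (unigrams[a] + 27)

def perplexity(model, text):
    """Average negative log-likelihood, exponentiated: low perplexity
    means the text looks 'expected' to the model -- the detection signal."""
    logp = sum(math.log(model(a, b)) for a, b in zip(text, text[1:]))
    return math.exp(-logp / max(len(text) - 1, 1))

model = bigram_model("the quick brown fox jumps over the lazy dog " * 50)
print(perplexity(model, "the lazy dog"))   # in-distribution: lower
print(perplexity(model, "zxqjv kwpflr"))   # out-of-distribution: higher
```

The real tools do the same comparison with the generating model's probabilities: text the model finds suspiciously easy to predict is flagged as likely machine-generated.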
MrFlamingQueen t1_je3kywp wrote
Reply to comment by TheEdes in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Agreed. It's very likely contamination. Even "new" LeetCode problems existed before they were published on the website.
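One rough way to probe for that kind of contamination is word n-gram overlap between a benchmark problem and the training corpus; a toy sketch (the corpus snippet and any threshold you'd pick are made up for illustration):

```python
def ngrams(text, n=5):
    """Set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(problem, corpus, n=5):
    """Fraction of the problem's n-grams that also appear in the corpus.
    A high score suggests the problem (or a near copy) was in the training data."""
    p = ngrams(problem, n)
    return len(p & ngrams(corpus, n)) / len(p) if p else 0.0

corpus = ("given an array of integers return indices of the two numbers "
          "that add up to a target")
new_problem = "return indices of the two numbers that add up to a target value"
print(overlap_score(new_problem, corpus))  # well above 0: likely seen before
```

Even a "new" problem that's a light paraphrase of an old one lights up under a check like this, which is exactly the concern with benchmarking on published LeetCode problems.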