JimIad t1_jc4j03x wrote
What do you think about ChatGPT detectors that try to tell whether someone has used it to generate text or not? Can they be relied upon? What are your thoughts on whether/how organisations and institutions should try to combat its use?
unemployedprofessors OP t1_jc4lu9b wrote
So I have more to say about how to combat it, and I don't want to rush that response. But here's my answer to your first two questions:
Right now, those detectors are absolute trash. Only fools would rely on them, and I cringe every time I see a post on Reddit claiming someone has been falsely accused of using AI because of one.
But they're getting better. I don't think TurnItIn (itself notable for a lot of false positives) is going to squander that profit opportunity, and as I posted in the r/unemployedprofs subreddit a few weeks ago, TurnItIn is already giving MVP demos of its AI detector to educators.
I also think that humans are becoming quick to recognize AI-generated content.
Especially the humans who care about words and writing and do a lot (or even just a little) reading - I think it was u/ramsesthepigeon who mentioned that its style has become recognizable.
ChatGPT has been out for what, 90 days? 100? By this point, its writing style (or lack thereof) is practically a meme. Like pornography, people know AI writing when they see it. So I think that very, very quickly, humans who have to assess a lot of written content will get better at identifying it, and the detectors will get better too, before the AI generation itself improves and kicks off another round of this AI-vs-humans game. But even if the generation tech iterates faster than the detection tech, people who've learned to identify ChatGPT writing will bring not just their skills in spotting it, but also their (potentially, by that point, reactionary) suspicion to what they read. That will make identifying it easier, even if it's also a minefield of false accusations.