
PhoneAcc2 t1_j6recky wrote

The article suggests there is a single "success" metric in OpenAI's publication, but there isn't one, and deliberately so.

Labeling text as AI-generated will always be fuzzy (active watermarking aside), and it will only get harder as models improve and grow larger. There is simply a region where human- and AI-written texts overlap.

Have a look at the FAQ on their page if you're interested in the details: https://platform.openai.com/ai-text-classifier
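To make the "no single metric" point concrete: rather than a binary pass/fail verdict, the classifier reports a probability and buckets it into several fuzzy labels. Here's a minimal sketch of that idea; the `bucket_label` function and the exact thresholds are my illustrative assumptions, not OpenAI's published code or cutoffs:

```python
def bucket_label(p_ai: float) -> str:
    """Map an 'AI-generated' probability to a fuzzy label.

    Thresholds are illustrative assumptions, not OpenAI's actual cutoffs.
    """
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    elif p_ai < 0.45:
        return "unlikely AI-generated"
    elif p_ai < 0.90:
        return "unclear if it is AI-generated"
    elif p_ai < 0.98:
        return "possibly AI-generated"
    else:
        return "likely AI-generated"

print(bucket_label(0.05))  # very unlikely AI-generated
print(bucket_label(0.60))  # unclear if it is AI-generated
```

The point of the bucketed output is exactly the overlap the comment describes: for texts in the middle of the distribution, no honest classifier can do better than "unclear."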

4

IKetoth t1_j6rfyq3 wrote

No need; I don't see the point of this. Given 3-5 years of adversarial training, if left unregulated, the two will be impossible to tell apart to any degree that would make detection worthwhile. We need to adapt to the fact that AI writing is poised to replace human writing in anything not requiring logical reasoning.

Edit: I'd add that we need to start thinking, as a species, about the fact that we've reached the point where human labour need not apply. There are now automated ways to do nearly everything; the only things stopping us are the will to use them and resources being concentrated rather than distributed. Assuming plentiful resources, nearly everything CAN be done without human intervention.

3