Trotskyist t1_jdt8tx6 wrote

Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42

It's still an extremely useful tool if you accept its limitations, and I think it's reductive to say it can only solve "dumb" problems or suggest boilerplate code.

I used GPT-4 the other day to refactor and optimize an extremely bespoke, fairly complicated geoprocessing script that we use at work, written by a former employee who's no longer with the organization. Yes, it got some things wrong that had to be corrected (sometimes all it took was feeding it a stack trace; other times that wasn't enough and I had to figure out the issue myself).

But at the end of the day (literally, this was over the course of an afternoon), I'd managed to cut the runtime by more than half, using libraries I'd never touched before. It probably would have taken a week to implement otherwise.

11

Trotskyist t1_j74x4um wrote

>Also, software to recognize AI generated content is already being made and I'm sure schools will implement a submit system that verifies their students work.

I wouldn't be so sure. As soon as an algorithm is created to detect AI-generated content, that same detector can and will be used to further train the generating network to avoid detection. This is the basic premise behind generative adversarial networks (GANs), one of the better-known ML techniques.
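The adversarial loop described above can be sketched with a toy 1-D example: a "detector" (discriminator) is trained to tell real samples from generated ones, and the generator is then updated using the detector's own output as the training signal, so its samples score as real. This is a minimal hand-rolled NumPy sketch of the GAN idea, not any actual AI-text detector; the affine generator and logistic detector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" samples: the distribution the detector is trained to recognize
real = rng.normal(4.0, 1.0, size=256)

# Generator: a simple affine map from noise, g(z) = a*z + b (starts far from real)
a, b = 1.0, 0.0
# Detector/discriminator: logistic classifier d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=256)
    fake = a * z + b

    # --- Detector update: push d(real) -> 1 and d(fake) -> 0 ---
    dr = sigmoid(w * real + c)
    df = sigmoid(w * fake + c)
    grad_w = np.mean((dr - 1.0) * real) + np.mean(df * fake)
    grad_c = np.mean(dr - 1.0) + np.mean(df)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: use the detector's gradient to make fakes score as real ---
    df = sigmoid(w * (a * z + b) + c)
    # gradients of -log d(fake) w.r.t. a and b, via the chain rule through the sigmoid
    grad_a = np.mean((df - 1.0) * w * z)
    grad_b = np.mean((df - 1.0) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, the generator's output mean has drifted toward the real mean (4.0)
fake_mean = float(np.mean(a * rng.normal(size=10_000) + b))
```

The same dynamic is the worry at scale: every improvement to the detector supplies exactly the training signal the generator needs to evade it, so detection and evasion tend toward a stalemate rather than a reliable classifier.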

2