Submitted by jsonathan t3_106q6m9 in MachineLearning
jsonathan OP t1_j3i0txg wrote
Reply to comment by uoftsuxalot in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Yeah, right now it’s just a thin wrapper around GPT-3, but there’s a lot that could be done to improve it, like using static code analysis to build a better prompt or even training a more specialized model (like this).
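(To illustrate the static-analysis idea: a minimal sketch of pulling the exception name and failing line number out of a Python traceback to build a richer prompt. The template and function name here are made up for illustration; real analysis could add symbol tables, types, and surrounding context.)

```python
import re

def build_prompt(code: str, stderr: str) -> str:
    """Combine source code and key traceback facts into an LLM prompt.

    Hypothetical sketch: not Adrenaline's actual implementation.
    """
    # The last line of a CPython traceback names the exception.
    error_line = stderr.strip().splitlines()[-1]
    # Pull the line number of the failing frame, if present.
    match = re.search(r"line (\d+)", stderr)
    lineno = match.group(1) if match else "unknown"
    return (
        "Fix the following Python error.\n"
        f"Error: {error_line} (line {lineno})\n"
        f"Code:\n{code}\n"
        "Corrected code:"
    )

stderr = """Traceback (most recent call last):
  File "example.py", line 2, in <module>
    print(1 / 0)
ZeroDivisionError: division by zero"""

prompt = build_prompt("print(1 / 0)", stderr)
print(prompt)
```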
2Punx2Furious t1_j3l26ui wrote
Even fine-tuning the prompt could get much better results. Prompt engineering is important.
datamakesmydickhard t1_j3o73d6 wrote
Has it really come to this
2Punx2Furious t1_j3o7hps wrote
Yes, it's been like this for a while now.
ginger_beer_m t1_j3jlhaj wrote
How did you deal with incorrect output from ChatGPT?
jsonathan OP t1_j3joesx wrote
I didn't. Adrenaline won’t always correctly fix your error, but it can at least give you a starting point.
kelkulus t1_j3k8w2w wrote
Well for one, he's not using ChatGPT. GPT-3 is not the same.
danielswrath t1_j3l1fkb wrote
GPT-3 has the same problem though. ChatGPT is a successor of GPT-3, so it's not the same but it's not extremely different either.
Glum-Bookkeeper1836 t1_j3lhgfx wrote
I'm not sure if we know this for certain, but it appears to be davinci instruct 3 with a custom prompt prefix.
cloudedleopard42 t1_j3p7pr3 wrote
Is it possible to fine-tune GPT for static code analysis? If yes, what would the training set look like?
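(One plausible shape for such a training set, sketched here as an assumption rather than a known recipe: JSONL pairs mapping a buggy snippet plus its error to a corrected snippet, the prompt/completion format OpenAI's fine-tuning API accepted at the time.)

```python
import json

# Hypothetical fine-tuning examples: error + buggy code -> fixed code.
examples = [
    {
        "prompt": "Error: NameError: name 'n' is not defined\nCode:\nprint(n)\nFix:",
        "completion": " n = 0\nprint(n)",
    },
    {
        "prompt": "Error: ZeroDivisionError: division by zero\nCode:\nprint(1 / 0)\nFix:",
        "completion": " print(1 / 1)",
    },
]

# OpenAI fine-tuning expected one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

In practice the hard part is collecting such pairs at scale, e.g. from commit histories where a failing build was followed by a fix.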