Submitted by vintergroena t3_123asbg in MachineLearning
Fit-Recognition9795 t1_jdtz20q wrote
You think we're further along than we actually are... and I also wish we were there, but we're not.
Not saying it won't be possible one day, but have you tried asking GPT-4 to multiply two 3-digit numbers?
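To make that test concrete, here's a minimal sketch of the kind of spot-check I mean (the `ask_model` helper is a hypothetical placeholder for whatever chat API you'd call):

```python
# Quick harness for spot-checking a model on 3-digit multiplication.
# ask_model() is a hypothetical stand-in for your actual API call.
import random

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def spot_check(trials: int = 10) -> None:
    for _ in range(trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = ask_model(f"What is {a} * {b}? Reply with just the number.")
        try:
            ok = int(reply.strip().replace(",", "")) == a * b
        except ValueError:
            ok = False
        print(f"{a} * {b} = {a * b}, model said {reply!r} -> {'OK' if ok else 'WRONG'}")
```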
vintergroena OP t1_jdtzyuo wrote
Yeah, GPT is bad at tasks that require actual thinking, and personally I'm kind of skeptical about its real usefulness, tbh. But my impression is that despite being primarily built for natural language, it actually works better with computer code, probably because code has a much simpler structure. That got me thinking: building something more specialized that only needs to work with computer code might be an easier task; more similar to automated translation, perhaps, which already works pretty well using ML.
nonotan t1_jdv8hy1 wrote
I can't speak for GPT-4, but in my experience with ChatGPT, I definitely wouldn't say it's better with code. It's just absurdly, terribly, unbelievably bad at maths. It's a bit better at dealing with code, but that doesn't mean it's good; you're just comparing against its weakest area. It's not really capable of generating code that does anything even a little complex without heavy guidance: pointing out its mistakes and getting it to make revision after revision (and even that is non-trivial to get it to do; it tends to just start generating completely different programs with completely different problems instead).
That being said, I can definitely believe it could do okay at decompilation. It's a comparatively easy task in general, and the "trickiest" bit (interpreting what the program is supposed to be doing, to have the context to name variables etc.) feels like the kind of thing it'd perform surprisingly well at. It tends to do okay at getting a general "vibe" and sticking with it, and at translating A to B. It's when it needs to generate entirely novel output that must fulfill multiple requirements at once that it starts failing miserably.
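For what it's worth, that "trickiest bit" is roughly the renaming step. A toy illustration (not real decompiler output):

```python
# A decompiler can recover the structure, but the identifiers carry no meaning.
def func_401a30(a1, a2):
    v1 = 0
    for v2 in range(a2):
        v1 += a1[v2]
    return v1 / a2

# The context-dependent part is inferring intent and renaming accordingly:
def mean(values, count):
    total = 0.0
    for i in range(count):
        total += values[i]
    return total / count
```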
fmfbrestel t1_jdwmb7z wrote
Most of those problems come down to the context/memory limitations of the general-purpose service. I can imagine locally hosted GPTs with training access to an organization's source code, development standards, and database schemas. Such a system could be incredibly useful. Human developers would just provide the prompts, supervise, approve, and test the new or updated code (rough sketch below).
It would have to be locally hosted, because most orgs are NOT going to feed their source code to an outside agency, no matter what efficiency gains are promised.
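Roughly what I have in mind, as a toy sketch (the `local_model` call is a hypothetical placeholder, and a real system would use embedding-based retrieval rather than this naive keyword overlap):

```python
# Rank in-house source files by naive keyword overlap with the task,
# then hand the best matches to an on-prem model as context.
from pathlib import Path

def relevance(query: str, text: str) -> int:
    # Toy relevance score: count query words that appear in the file.
    words = set(query.lower().split())
    return sum(1 for w in words if w in text.lower())

def build_prompt(query: str, repo_root: str, top_k: int = 3) -> str:
    files = list(Path(repo_root).rglob("*.py"))
    ranked = sorted(files, key=lambda f: relevance(query, f.read_text(errors="ignore")),
                    reverse=True)
    context = "\n\n".join(f"# {f}\n{f.read_text(errors='ignore')}" for f in ranked[:top_k])
    return f"Project code:\n{context}\n\nTask: {query}"

def local_model(prompt: str) -> str:
    raise NotImplementedError("plug in your locally hosted model here")
```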