
nonotan t1_jdv8hy1 wrote

I can't speak for GPT-4, but in my experience with ChatGPT, I would definitely not say it is better with code. It's just absurdly, terribly, unbelievably bad at maths. It's somewhat better at dealing with code, but that doesn't mean it's good; you're just comparing against its weakest area. It's not really capable of generating code that does anything even a little complex without heavy guidance: pointing it at its mistakes and getting it to make revision after revision (and even that is non-trivial, since it tends to just start generating completely different programs with completely different problems instead).

That being said, I can definitely believe it could do okay at decompilation. Comparatively, it's an easy enough task in general, and the "trickiest" bit (interpreting what the program is supposed to be doing, so it has the context to name variables, etc.) feels like the kind of thing it'd perform surprisingly well at. It tends to do okay at getting a general "vibe" and sticking with it, and at translating A to B. It's when it needs to generate entirely novel output that has to fulfill multiple requirements at once that it starts failing miserably.


fmfbrestel t1_jdwmb7z wrote

Most of those problems are due to the input/memory limitations of general-purpose use. I can imagine locally hosted GPTs trained on an organization's source code, development standards, and database schemas. Such a system could be incredibly useful: human developers would just provide the prompts, then supervise, approve, and test the new or updated code.

It would have to be locally hosted, because most orgs are NOT going to feed their source code to an outside agency, regardless of any promised efficiency gains.
