Submitted by Dan60093 t3_10came3 in singularity
If ChatGPT can generate code from simple prompts, then what's stopping OpenAI from setting up a positive coding feedback loop for it to work on its own fork of itself?
I understand that the code it generates is usually pretty simple and not always correct, but I feel like it's correct enough of the time that it could catch its own errors with additional "check this code before implementing" prompts from itself. I also understand that it's probably quite a bit more complicated than I'm realizing, but if even OpenAI's own team is using GPT as a coding assistant, then surely there has to be a way to cut out the middleman with some finagling?
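To make the loop concrete, here's roughly the shape of what I'm picturing. This is just a sketch: ask_llm() is a made-up placeholder for whatever completion API you'd actually call, and the prompts are only illustrative.

```python
# Sketch of the generate -> self-check -> revise loop described above.
# ask_llm() is a hypothetical stand-in for whatever completion API you
# have access to; nothing here assumes a specific OpenAI endpoint.

def ask_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to the model, return its reply."""
    raise NotImplementedError("wire this up to your completion API of choice")

def generate_and_self_check(task: str, max_revisions: int = 3) -> str:
    # Step 1: ask the model to write code for the task.
    code = ask_llm(f"Write Python code that does the following:\n{task}")

    for _ in range(max_revisions):
        # Step 2: the "check this code before implementing" prompt,
        # answered by the same model that wrote the code.
        review = ask_llm(
            "Check this code before implementing. List any bugs you find, "
            "or reply with exactly LGTM if it is correct:\n" + code
        )
        if review.strip() == "LGTM":
            break
        # Step 3: feed the model's own critique back in and ask for a revision.
        code = ask_llm(
            f"Revise this code to fix the problems listed.\n"
            f"Problems:\n{review}\n\nCode:\n{code}"
        )
    return code
```

The obvious weak point is step 2 - the model is grading its own homework - but that's the feedback loop I mean.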
Beyond that, even if it couldn't do what I'm describing, there must surely be some perfectly worded prompt out there that could get it to analyze its own hardware/software, come up with a running list of improvements that could be made, and suggest ways to go about making them.
This is all assuming only ChatGPT's capabilities, too - if even ChatGPT could plausibly do it, why on Earth wouldn't that be in place with GPT-4? They obviously have a working demo of it that's blowing investors' little slimy billionaire minds out of the water enough to secure funding without even having made any profit from ChatGPT, so it must be operational enough for simple code revisions and improvements.
I'll come right out and say it: why isn't ChatGPT the seed for a proto-AGI?
2bdb2 t1_j4f1d7p wrote
> If ChatGPT can generate code from simple prompts, then what's stopping OpenAI from setting up a positive coding feedback loop for it to work on its own fork of itself?
>
> I'll come right out and say it: why isn't ChatGPT the seed for a proto-AGI?
Being generous, the code written by ChatGPT is at best at the level of a mediocre first-year IT student. It can write simple boilerplate based on solutions it's already seen, but it has limited ability to actually solve complex problems.
This is still an incredibly impressive achievement, and it blows my mind every time I see it in action. But it's about as likely to make the next major breakthrough in AI research as our imaginary mediocre first-year IT student is.
It's hard not to imagine a point where AI is able to improve itself faster than humans can, thus essentially writing the next version of itself. But we're not there yet.