2bdb2 wrote

> DeepMind's AlphaCode can certainly code better than most median-quality developers.

AlphaCode does well at solving competitive-programming quiz questions. From my own experience with those types of quizzes, they're mostly just maths questions solved with code.
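To give a concrete (hypothetical) illustration of what I mean, here's the flavour of question those contests tend to ask, sketched in Python. The problem and code are my own example, not an actual AlphaCode task:

```python
# Classic contest-style question: "How many trailing zeros does n!
# have?" The hard part is the maths insight (count the factors of 5,
# since factors of 2 are always more plentiful); the code is trivial
# once you know that.

def trailing_zeros_of_factorial(n: int) -> int:
    count = 0
    power_of_five = 5
    while power_of_five <= n:
        count += n // power_of_five  # multiples of 5, 25, 125, ...
        power_of_five *= 5
    return count

print(trailing_zeros_of_factorial(100))  # 24
```

Notice there's no architecture, no state, no integration with anything. Once you spot the maths, the program basically writes itself.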

Doing well at those types of questions has very little bearing on most real-world software engineering.

Now, to be fair, machine learning is a lot more math-focused than typical software engineering. But if we're going with the assertion that "AlphaCode can certainly code better than most median-quality developers" based on doing well at quiz questions, then I'm going to disagree.

> So let's rephrase the question to focus on AlphaCode instead of ChatGPT.
>
> How does that change your response, if at all?

Not really.

Don't get me wrong, AlphaCode is still mind-blowing. I really don't want to understate how impressive it is. But I don't think it's at the level of being able to implement itself. Yet.

(Disclaimer: I am not an AI researcher, so take my opinion with a grain of salt).

2bdb2 wrote

> If ChatGPT can generate code from simple prompts, then what's stopping OpenAI from setting up a positive coding feedback loop for it to work on its own fork of itself?
>
> I'll come right out and say it: why isn't ChatGPT the seed for a proto-AGI?

Being generous, the code written by ChatGPT is at best at the level of a mediocre first-year IT student. It can write simple boilerplate based on solutions it's already seen, but it has limited ability to actually solve complex problems.
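For illustration (my own made-up example, not actual ChatGPT output), this is the sort of boilerplate it reliably gets right, because thousands of near-identical snippets exist in its training data:

```python
# Typical boilerplate task: sum a numeric column in a CSV, grouped by
# another column. Easy to generate because it pattern-matches against
# countless existing examples of the same shape.

import csv
from collections import defaultdict

def total_sales_by_region(path: str) -> dict:
    """Return the total of the 'sales' column per 'region'."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["region"]] += float(row["sales"])
    return dict(totals)
```

Ask it to design the system around that function, or to debug something subtle across several interacting components, and the wheels come off quickly.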

This is still an incredibly impressive achievement, and it blows my mind every time I see it in action. But it's about as likely to make the next major breakthrough in AI research as our imaginary mediocre first-year IT student is.

It's hard not to imagine a point where AI is able to improve itself faster than humans can, thus essentially writing the next version of itself. But we're not there yet.
