
Beneficial_Fall2518 t1_itln7t8 wrote

I understand that scaling a self-improving language model alone won't yield AGI. However, what about the coding capabilities language models such as GPT-3 have demonstrated? Scale up a text-to-code model and give it access to its own code. How long would it take for that to spiral into something we don't understand?

1

AdditionalPizza t1_itlqg25 wrote

I'm curious what we will see with a GPT-4-based Codex. Judging by engineer/CEO interviews, they already know something massive is right around the corner.

5

radioOCTAVE t1_itm6hzw wrote

I’m curious! What interview(s)?

3

AdditionalPizza t1_itmatqp wrote

Here's a few:

First

Second

Third

Fourth

At some point in each of these, they casually mention "5 to 10 years" or so when referring to AGI or transformative AI being capable of doing most jobs. There are a few more out there, but these were the ones in my recent history.

I recommend watching some videos from Dr. Alan D. Thompson for a continuous stream of cool language model capabilities he explains. He's not a CEO or anything, but he puts out some interesting videos.

And then there's this one here talking about AI programming. Another here; in this interview he mentions hoping people forget about GPT-3 and move on to something else. Hinting at GPT-4, maybe? Not sure.

6