1II1I11II1I1I111I1

1II1I11II1I1I111I1 t1_jeea3wq wrote

>the truth is no one knows how close we really are to it, or if we are even on the right path at all yet.

Watch this interview with Ilya Sutskever. He seems pretty confident about the future and about the obstacles between here and AGI. The insiders at OpenAI clearly have a sense of how close we are to AGI, and scaling LLMs to get there is no longer outside the realm of feasibility.

3

1II1I11II1I1I111I1 t1_jee9x9f wrote

Watch this interview with Ilya Sutskever if you get the chance. He's the chief scientist (the brains) of OpenAI. If you read between the lines, or even take what he says at face value, it seems to him like there are very few hurdles between the current paradigm of scaling LLMs and achieving AGI. We're very clearly on track, and the pace is very clearly only increasing. Unless regulation slows down AGI, it's most likely here before 2030.

5

1II1I11II1I1I111I1 t1_je0g9bo wrote

Would you say the Microsoft paper from LESS THAN TWO WEEKS AGO, which argues that early forms of AGI can be observed in GPT-4, isn't the "thoughts of professionals and academics"?

All an AGI needs to be able to do is build another AI. The whole point is that ASI comes very soon after AGI.

4

1II1I11II1I1I111I1 t1_je0fjf3 wrote

Twitter (Takes a while to curate your feed, but you get the freshest information there, as well as quality informed content if you follow the right people i.e. academics and researchers)

r/singularity (the rest of Reddit is far too behind talking about AI; r/ChatGPT can have good content amongst all the garbage)

YouTube (AI Explained, Firecode, Robert Miles. Content is very quickly outdated though)

Less Wrong

Hacker News

I actually think people on HN are pretty informed about the rate of change in AI. The recent post about a 3D artist becoming disillusioned with their work after being 'replaced' by GPT had a lot of comments clearly discussing the immediate and massive impact AI will have on society.

7

1II1I11II1I1I111I1 t1_je0drvf wrote

Bruh...

The goalposts for AGI are continually moved by people who want to remain ignorant.

Transformative technology is literally already here. Within a year GPT-4 will be involved in most people's personal or professional lives. Now realise that the technology is only improving (faster than predicted).

Would anyone hire you over GPT-4? How about GPT-5? What about GPT-6 with internet access, and full access to, and memorization of, your company's database?

19

1II1I11II1I1I111I1 t1_jdxrjvi wrote

If you step out of the hypothetical realm, you can see containment is already impossible. GPT-4 was attached to the internet within 10 days of being created, and a norm has certainly been established.

Theoretically it might make some sense to aim for containment (though Yudkowsky's AI box experiment suggests even a contained AI could talk its way out). But in the world we live in, containment is no longer an option.

3

1II1I11II1I1I111I1 t1_jdws8u9 wrote

Yep, agreed.

The reason I don't worry too much about hallucinations and truthfulness is that Ilya Sutskever (OpenAI) says they're very likely to be solved in the 'nearer future'; current limitations are just current limitations. Exactly like the limitations of two years ago, we'll look back at this moment as just another minor development hurdle.

Edit: Yep, suss this tweet: https://twitter.com/ciphergoth/status/1638955427668033536?s=20 People confidently said "don't connect it to the internet and it won't be a problem". We've been dazzled by current changes, and now such a fundamental defence has been bypassed. Because of what? Convenience? Optimism? Blind faith?

2