Smoke-away t1_irhfjfp wrote

I always thought it would be DeepMind, but OpenAI is getting close.

The big wildcard comes from the open-source movement led by StabilityAI (known for Stable Diffusion).

The number of projects that have spun off from Stable Diffusion is enormous. Their combined impact and reach far outweigh those of OpenAI's DALL-E 2. I could see a similar thing happening with the next big large language models, like GPT-4. You could imagine a scenario where OpenAI releases GPT-4, then StabilityAI or a similar organization releases an open-source version a while later, and the community builds a large number of projects on top of it. In this scenario one of the leaders could release a pre-AGI model, and a competitor, or even an individual, could use that momentum to go beyond it, if that makes any sense.

As John Carmack said on the Lex Fridman Podcast:

> It is likely that the code for artificial general intelligence is going to be tens of thousands of lines of code, not millions of lines of code. This is code that conceivably one individual could write.

As we get closer to AGI, companies will be incentivized to keep their best models private for as long as possible so they don't get leapfrogged upon release. Others take the opposite approach, keeping their models as open and widely available as possible to avoid a winner-takes-all scenario.

MercuriusExMachina t1_irjxtx5 wrote

I agree with most of what you say, but please note one key difference between diffusion models and language models: size, and therefore compute cost. Diffusion models are tiny compared to language models.
