Submitted by Klaud-Boi t3_127x67n in singularity
jugalator t1_jeh12mh wrote
GPT-3 was released three years ago, and it took another three years to get GPT-4, so maybe it'll be yet another three years. It feels like advancements have been coming within mere months, but that's not true. They just happened to launch the ChatGPT site with conversation tuning shortly before GPT-4, but GPT-3 itself is not "new".
I don't expect some sort of exponential speed here. They're already running into hardware roadblocks with GPT-4 and currently probably have their hands full trying to pull off a GPT-4 Turbo, since this is quite a desperate situation. As for exponentials, it looks like resource demand increases exponentially too...
Then there is the political situation as AI awareness spreads. For any progress, there need to be very real financial motives (preferably without overly high running costs) and low political risks. Is that what the horizon looks like today?
Also, there is the question of when diminishing returns hit LLMs of this kind. If we're looking at another 10x in cost for a 20% improvement, it probably won't be deemed justified; the focus would rather shift to innovating on exactly how much you can do with a given parameter size. The Stanford dudes kind of opened some eyes there.
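To make that diminishing-returns point a bit more concrete, here's a minimal sketch assuming a power-law loss curve of the general shape the scaling-law papers describe. The constants (a, b, floor) are placeholders I made up, not fitted values, so only the shape of the trade-off matters, not the numbers.

```python
# Illustrative sketch of diminishing returns from scaling compute.
# Assumes loss ~ a * compute^(-b) + floor, with made-up constants.

def loss(compute: float, a: float = 10.0, b: float = 0.05, floor: float = 1.5) -> float:
    """Hypothetical loss as a function of training compute (arbitrary units)."""
    return a * compute ** -b + floor

base, scaled = 1.0, 10.0          # "current" model vs. 10x the training cost
l0, l1 = loss(base), loss(scaled)
headroom = l0 - loss(float("inf"))  # how much loss could ever be removed
gain = (l0 - l1) / headroom         # fraction of that headroom the 10x buys

print(f"loss at 1x: {l0:.2f}, at 10x: {l1:.2f}, gain over remaining headroom: {gain:.0%}")
```

With these placeholder numbers, 10x the compute buys only around a tenth of the remaining headroom, which is the flavour of "10x cost for a modest improvement" trade-off being pointed at.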
My guess is that the next major advancement will be roughly GPT-4-sized.
SkyeandJett t1_jeh3k1n wrote
You should watch the Ilya interview. He's confident there's still plenty of room for growth with just text, but the real advancements will come from multi-modal training data. I'd also take a look at Cerebras hardware; there's plenty of room for advancement in training hardware as well. We've got a LOT of runway ahead before hitting any real blocks, and by then I'm 100% sure we'll have already hit self-improving AGI.