IonizingKoala t1_j91m923 wrote

Which part? LLM-capable hardware getting really really cheap, or useful LLMs not growing hugely in parameter size?

1

duboispourlhiver t1_j91x4ao wrote

I meant that, IMHO, GPT-3-level LLMs will have fewer parameters in the future.

2

IonizingKoala t1_j924sbn wrote

I see. Even at a 5x reduction in parameter size, that's still not enough to run on consumer hardware (we're talking 10B vs. 500M), but I recognize what you're trying to say.
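
Rough weights-only math makes the gap concrete (a back-of-the-envelope sketch, assuming 2 bytes per parameter at fp16 and 0.5 bytes at 4-bit quantization; real inference also needs room for activations and the KV cache):

```python
# Back-of-the-envelope VRAM needed just to hold model weights.
# Assumes 2 bytes/param (fp16) and 0.5 bytes/param (int4); ignores
# activations, KV cache, and framework overhead.

def weights_gib(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GiB."""
    return params * bytes_per_param / 1024**3

for label, params in [("500M", 500e6), ("10B", 10e9), ("175B (GPT-3)", 175e9)]:
    print(f"{label:>13}: ~{weights_gib(params, 2):6.1f} GiB fp16, "
          f"~{weights_gib(params, 0.5):5.1f} GiB int4")
```

That works out to roughly 0.9 GiB for 500M at fp16 versus about 18.6 GiB for 10B (or ~4.7 GiB at int4), so a 10B model only squeezes onto the very top end of consumer GPUs (~24 GB of VRAM), while 500M runs almost anywhere.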

2