
IonizingKoala t1_j91jdx7 wrote

LLMs will not be getting smaller. Getting better ≠ getting smaller.

Now, will really small models run on some RTX 6090 Ti in the future? Probably. Think GPT-2. But none of the actually useful models (X-Large, XXL, 10XL, etc.) will be accessible at home.
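To ground the "think GPT-2" point: running a GPT-2-scale model at home is already trivial. A minimal sketch, assuming the Hugging Face transformers library (my choice for illustration; the thread doesn't name a specific tool):

```python
# Minimal sketch: GPT-2 (~124M parameters) runs comfortably on
# consumer hardware, even CPU-only. Library choice is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Consumer hardware can run", max_new_tokens=20)
print(out[0]["generated_text"])
```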

1

duboispourlhiver t1_j91k8jk wrote

I disagree

1

IonizingKoala t1_j91m923 wrote

Which part? LLM-capable hardware getting really, really cheap, or useful LLMs not growing hugely in parameter count?

1

duboispourlhiver t1_j91x4ao wrote

I meant that, IMHO, GPT-3-level LLMs will have fewer parameters in the future.

2

IonizingKoala t1_j924sbn wrote

I see. Even at a 5x reduction in parameter count, that's still not enough to run on consumer hardware (we're talking 10B vs. 500M), but I recognize what you're trying to say.
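To make that arithmetic concrete, here's a rough back-of-envelope sketch of the VRAM needed just to hold the weights. The fp16 assumption (2 bytes per parameter) is mine, and it ignores activations, KV cache, and optimizer state, so real requirements are higher:

```python
# Back-of-envelope VRAM math for the sizes discussed above.
# Assumes fp16 weights (2 bytes/parameter); weights only.
def vram_gb(params: float, bytes_per_param: int = 2) -> float:
    return params * bytes_per_param / 1024**3

for name, params in [("500M", 500e6), ("10B", 10e9), ("GPT-3 (175B)", 175e9)]:
    print(f"{name}: ~{vram_gb(params):.1f} GB for weights alone")

# 500M        -> ~0.9 GB   (fits easily on a consumer GPU)
# 10B         -> ~18.6 GB  (already past most consumer cards)
# GPT-3 175B  -> ~326.0 GB (multi-GPU territory)
```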

2