Submitted by Neurogence t3_114pynd in singularity
duboispourlhiver t1_j90jyl8 wrote
Reply to comment by IonizingKoala in Microsoft Killed Bing by Neurogence
True. Progress in AI is even more impressive than Moore's law was, so maybe it will run at home because of progress on LLMs rather than progress on microelectronics.
IonizingKoala t1_j91jdx7 wrote
LLMs will not be getting smaller. Getting better ≠ getting smaller.
Now, will really small models be run on some RTX 6090 Ti in the future? Probably. Think GPT-2. But none of the actually useful models (X-Large, XXL, 10XL, etc.) will be accessible at home.
duboispourlhiver t1_j91k8jk wrote
I disagree
IonizingKoala t1_j91m923 wrote
Which part? LLM-capable hardware getting really really cheap, or useful LLMs not growing hugely in parameter size?
duboispourlhiver t1_j91x4ao wrote
I meant that IMHO, GPT-3-level LLMs will have fewer parameters in the future.
IonizingKoala t1_j924sbn wrote
I see. Even at a 5x reduction in parameter size, that's still not enough to run on consumer hardware (we're talking 10b vs. 500m), but I recognize what you're trying to say.
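The sizes being debated above can be put in perspective with back-of-envelope arithmetic on weight storage alone. This is a minimal sketch, assuming fp16 (2 bytes per parameter) and ignoring activations, the KV cache, and framework overhead, all of which add to the real footprint; the 10B and 500M figures come from the thread, and the 175B figure is GPT-3's published parameter count.

```python
# Rough VRAM needed just to hold a model's weights in memory.
# Assumption: fp16 precision, i.e. 2 bytes per parameter.
# Real inference needs more (activations, KV cache, overhead).

def weight_memory_gb(params: float, bytes_per_param: int = 2) -> float:
    """Gigabytes required to store the weights alone."""
    return params * bytes_per_param / 1e9

# GPT-3-scale model (175B params): ~350 GB, far beyond any single consumer GPU
print(weight_memory_gb(175e9))   # 350.0

# 10B-parameter model: ~20 GB, at the edge of high-end consumer cards
print(weight_memory_gb(10e9))    # 20.0

# 500M-parameter model: ~1 GB, trivially fits on consumer hardware
print(weight_memory_gb(500e6))   # 1.0
```

On these assumptions, the gap the commenter points at is real: a 5x parameter reduction from GPT-3 scale still leaves weights in the tens of gigabytes, while a 500M model fits comfortably on a consumer card.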