currentscurrents t1_j490rvn wrote
Reply to comment by BarockMoebelSecond in [D] Bitter lesson 2.0? by Tea_Pearce
It's meaningful right now because there's a capability threshold where LLMs become genuinely useful, but crossing it requires expensive, specialized datacenter GPUs.
I'm hoping that in a few years consumer GPUs will ship with 80GB of VRAM or so and we'll be able to run these models locally. Datacenters will still have more raw compute, but that advantage won't matter as much, since past a certain scale larger models would require more training data than actually exists.
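For a rough sense of the numbers, here's a back-of-envelope sketch (not from the thread: it assumes weights-only memory at fp16/int8 precision and the Chinchilla rule of thumb of roughly 20 training tokens per parameter):

```python
# Back-of-envelope: VRAM needed just to hold a model's weights, and how many
# training tokens the Chinchilla rule of thumb (~20 tokens per parameter,
# Hoffmann et al. 2022) would call for. Ignores KV cache and activations.

BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}
CHINCHILLA_TOKENS_PER_PARAM = 20  # rough rule of thumb, not exact

def vram_gb(params_billions: float, precision: str = "fp16") -> float:
    """GB of memory for the weights alone."""
    return params_billions * BYTES_PER_PARAM[precision]

def chinchilla_tokens_trillions(params_billions: float) -> float:
    """Compute-optimal training tokens, in trillions."""
    return params_billions * CHINCHILLA_TOKENS_PER_PARAM / 1000

for size in [7, 70, 175, 1000]:
    print(f"{size:>5}B params: "
          f"{vram_gb(size, 'fp16'):>5.0f} GB fp16, "
          f"{vram_gb(size, 'int8'):>5.0f} GB int8 | "
          f"~{chinchilla_tokens_trillions(size):.1f}T training tokens")
```

By this estimate a 70B-parameter model quantized to 8-bit just fits in 80GB of VRAM, while a 1T-parameter model would already want ~20T training tokens, which is in the ballpark of some estimates of all high-quality public text.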