Submitted by head_robotics in r/MachineLearning
catch23 wrote (replying to EuphoricPenguin22 in "[D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM"):
CPU inference does look to be 20-100x slower for those huge models, but it's still bearable if you're the only user on the machine. Still better than nothing if you don't have much GPU memory.
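For anyone who wants to try it, here's a minimal sketch of splitting a big model across a small GPU and system RAM with Hugging Face transformers + accelerate. The model name and memory caps below are illustrative placeholders, not something from this thread; tune them to your own hardware:

    # Minimal sketch: load a large causal LM split across limited VRAM
    # and system RAM using transformers + accelerate device dispatch.
    # Model name and memory limits are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-neox-20b"  # example model, swap in your own

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,               # halves memory vs. float32
        device_map="auto",                       # let accelerate place layers
        max_memory={0: "8GiB", "cpu": "32GiB"},  # cap GPU 0 and system RAM
        offload_folder="offload",                # spill any remainder to disk
    )

    # Inputs go to the GPU; accelerate's hooks move activations between
    # devices as execution crosses layer boundaries.
    inputs = tokenizer("Hello, world", return_tensors="pt").to(0)
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Layers that fit in the 8 GiB of VRAM run on the GPU; the rest run from system RAM (or disk), which is where the 20-100x slowdown comes from.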
EuphoricPenguin22 wrote:
Yeah, and DDR4 DIMMs are fairly inexpensive compared to upgrading to a GPU with more VRAM.