Submitted by Outrageous_Room_3167 t3_zmqp2q in deeplearning
suflaj t1_j0de8ja wrote
Reply to comment by Outrageous_Room_3167 in I have 6x3090 looking to build a rig by Outrageous_Room_3167
There is no larger memory. NVLink only increases inter-GPU bandwidth, by up to 300 GB/s; it does not pool memory unless the software implements memory pooling, which no relevant DL framework does.
Every week this has to be explained to yet another aspiring system integrator...
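A quick way to convince yourself of this (a minimal sketch, assuming PyTorch and at least two visible GPUs): each card reports its own memory, and nothing merges them into one pool.

```python
import torch

# Each device reports its own VRAM; allocations on cuda:0 cannot spill
# over into cuda:1, NVLink bridge or not.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
```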
ribeirao t1_j0e5fow wrote
Not OP, but that's good to know, so it would only speed up the process and not make one big GPU with 24+24 GB :(
suflaj t1_j0e646v wrote
You can always model-parallelize the model and put different parts of it on different cards, it's not that hard. Your biggest problem in that case is load balancing, but that can be handled with a bit of benchmarking and heuristics.
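For anyone curious what that looks like in practice, here is a minimal model-parallel sketch in PyTorch (the layer sizes, split point, and two-GPU setup are assumptions for illustration, not anything OP described): half the network lives on cuda:0, the other half on cuda:1, and activations get shipped between cards inside forward().

```python
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # first half of the network on the first card, second half on the second
        self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # hand the activations across the link (PCIe or NVLink) to the other card
        return self.part2(x.to("cuda:1"))

model = TwoGPUModel()
out = model(torch.randn(8, 1024))
out.sum().backward()
```

The load balancing issue is exactly where to put the split: you want both cards doing roughly equal work, which usually comes down to benchmarking a few candidate split points.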
ribeirao t1_j0e6lkg wrote
thanks for the keyword, I’ll keep this in mind when/if I buy another 3090
Outrageous_Room_3167 OP t1_j0gopig wrote
You know, typing it out, I had some doubts LOL, thanks