Submitted by soupstock123 t3_106zlpz in deeplearning
soupstock123 OP t1_j3l2srl wrote
Reply to comment by hjups22 in Building a 4x 3090 machine learning machine. Would love some feedback on my build. by soupstock123
Right now mostly CNNs, RNNs, and playing around with style transfer using GANs. Future plans include running computer vision models trained on video and testing inference performance, but I'm still researching how demanding that would be.
hjups22 t1_j3l3ln2 wrote
Those are all going to be pretty small models (under 200M parameters), so what I said probably won't apply to you. Still, I'd recommend parallel training rather than trying to link the GPUs together: 4 GPUs means you can run 4 experiments in parallel (or 8 if you double up on a single GPU).
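The parallel-experiments setup above can be sketched by pinning each training process to one GPU via `CUDA_VISIBLE_DEVICES`, so every process sees its assigned card as device 0. This is only an illustration; `train.py`, its `--config` flag, and the config filenames are assumed placeholder names, not anything from the build in question.

```python
def make_jobs(configs):
    """Pair each experiment config with a single-GPU environment.

    Each job's process would see only its pinned GPU, exposed as cuda:0,
    so four independent experiments can run side by side on four cards.
    """
    return [
        {
            "cmd": ["python", "train.py", "--config", cfg],  # hypothetical script
            "env": {"CUDA_VISIBLE_DEVICES": str(gpu)},       # pin to one GPU
        }
        for gpu, cfg in enumerate(configs)
    ]

jobs = make_jobs(["exp_a.yaml", "exp_b.yaml", "exp_c.yaml", "exp_d.yaml"])
```

Launching each `cmd` with its `env` (e.g. via `subprocess.Popen`) gives four fully independent runs, which is usually simpler and more robust than multi-GPU data parallelism for models this small.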
Regarding RAM speed: it has an effect, but it probably won't be significant for your planned workload. I recently changed the memory on one of my nodes so it could train GPT-J (I reduced the RAM speed in order to increase capacity); the speed difference for other tasks is probably within 5%, which I don't think matters. When you expect to run month-long experiments, an extra day is irrelevant.
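To put that 5% figure in perspective, here is the rough arithmetic (the 30-day baseline is an illustrative number, not from the thread):

```python
# Rough impact of a ~5% throughput loss on a month-long training run.
baseline_days = 30          # assumed length of the experiment
slowdown = 0.05             # ~5% slower memory-bound throughput
# Running at 95% speed stretches the same work over more wall-clock time.
extra_days = baseline_days * slowdown / (1 - slowdown)
print(round(extra_days, 2))  # about 1.58 extra days
```

On a 30-day run that is roughly a day and a half, which supports the point that capacity is the better trade when experiments are this long.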