Submitted by ronaldxd2 t3_xwfaft in deeplearning
Knurpel t1_ir90nkx wrote
Assuming that your deep learning stack uses CUDA: Multi-GPU CUDA is not for the faint of heart, and most likely will require intense code wrangling on your part. It's not as easy as sticking in another GPU.
Your GPUs support the outgoing NVLink, and using it would make things easier on you.
MyActualUserName99 t1_irapufa wrote
If you’re using Tensorflow, adding multiple GPUs is extremely easy. Just have to call some functions and make a strategy:
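The commenter doesn't include the code, but they are most likely referring to `tf.distribute.MirroredStrategy`, TensorFlow's built-in API for synchronous data-parallel training across multiple GPUs. A minimal sketch (the model architecture here is just an arbitrary placeholder):

```python
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU;
# if only one device is available, it runs on that single device.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the scope are mirrored across devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# From here, model.fit(...) is called as usual; the strategy splits
# each batch across replicas and aggregates gradients automatically.
```

Note that `MirroredStrategy` communicates over whatever interconnect is available (NVLink, PCIe), so it works without NVLink, just with more transfer overhead.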
Knurpel t1_ird8vtd wrote
>extremely easy
... depends on your coding proficiency. As I said, "it's not as easy as sticking in another GPU."
GPUaccelerated t1_ireefhv wrote
The 3070 Ti and 3080 Ti do not support NVLink.
Knurpel t1_ireu5b5 wrote
Oops. I'm only familiar w/ the 90.