
Knurpel t1_ir90nkx wrote

Assuming that your deep learning stack uses CUDA: multi-GPU CUDA is not for the faint of heart, and it will most likely require intense code wrangling on your part. It's not as easy as sticking in another GPU.

Your GPUs support the outgoing NVLink, and using it would make things easier on you.
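To give a feel for the lower-effort path: a minimal sketch assuming PyTorch as the deep learning stack (the model and tensor shapes here are made up for illustration). `nn.DataParallel` is the simplest, though not the fastest, way to spread a model across multiple GPUs; on a single-GPU or CPU-only machine it falls back to running the plain model.

```python
import torch
import torch.nn as nn

# Toy model; any nn.Module works the same way.
model = nn.Linear(10, 2)

if torch.cuda.device_count() > 1:
    # Replicates the model on each visible GPU and splits every
    # input batch among them. No CUDA-level code wrangling needed,
    # but it scales worse than DistributedDataParallel.
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

out = model(torch.randn(4, 10).to(device))
print(out.shape)  # torch.Size([4, 2])
```

Going beyond this (custom kernels, explicit peer-to-peer copies over NVLink) is where the serious wrangling starts.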

https://medium.com/gpgpu/multi-gpu-programming-6768eeb42e2c
