Submitted by ronaldxd2 t3_xwfaft in deeplearning

I have 3 GPUs lying around here, one RTX 3080 Ti and two RTX 3070 Tis, for image learning tasks. Is it worth putting all 3 GPUs to work together on a single task, or is it better to stick with the 3080 Ti alone?

1

Comments


Aromatic-Ad-2497 t1_ir6mvvw wrote

It highly depends on the task, and especially on how fast or accurate it needs to be.

Former miner trying to make use of spare GPUs, hehe?

1

ronaldxd2 OP t1_ir6rqfs wrote

Image comparison, using a database to recognize patterns. Accuracy close to 90%. Yeah, I need to find new ways to put my GPUs to work lol.

1

Aromatic-Ad-2497 t1_ir722g0 wrote

Me too, brother. You have the right idea. The two 3070 Tis will excel together with FP16, while you’ll want to keep the 3080 Ti standalone for batch jobs.

1
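On the FP16 point above: a minimal sketch of enabling mixed precision in TensorFlow (the framework mentioned later in the thread). The layers, input shape, and class count are placeholders, not anything from the thread.

```python
# Hedged sketch: enabling FP16 (mixed precision) in TensorFlow 2.x.
# The model, input shape, and class count are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16 where it is safe; variables stay in float32 for stability.
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    layers.GlobalAveragePooling2D(),
    # Keep the final softmax in float32 to avoid numeric issues.
    layers.Dense(10, activation="softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Keeping the output layer in float32 is the usual precaution against an unstable float16 softmax; everything else runs in half precision, which is where the 3070 Tis' tensor cores help.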

Knurpel t1_ir90nkx wrote

Assuming that your deep learning stack uses CUDA: multi-GPU CUDA is not for the faint of heart and will most likely require intense code wrangling on your part. It's not as easy as sticking in another GPU.

Your GPUs support NVLink, which is on its way out, and using it would make things easier on you.

https://medium.com/gpgpu/multi-gpu-programming-6768eeb42e2c

1
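On the multi-GPU point: if the stack is a framework like TensorFlow rather than hand-written CUDA, built-in data parallelism hides most of that wrangling. A minimal sketch using tf.distribute.MirroredStrategy, assuming TensorFlow 2.x and that the model fits in each card's memory; the ResNet50 model and random data are placeholders.

```python
# Hedged sketch: framework-level data parallelism across all visible GPUs.
# Model and data are placeholders; assumes TensorFlow 2.x.
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU, splits each
# batch across them, and all-reduces the gradients automatically.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=10)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Random tensors just to show the call pattern; real image data goes here.
x = tf.random.uniform((64, 224, 224, 3))
y = tf.random.uniform((64,), maxval=10, dtype=tf.int32)
model.fit(x, y, batch_size=32, epochs=1)
```

One caveat with mismatched cards: MirroredStrategy is synchronous, so the 3080 Ti waits for the slower 3070 Tis every step, which caps throughput at roughly three times the slowest card.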

GPUaccelerated t1_ireeqet wrote

It’s definitely worth the test! Have some fun and play around with TensorFlow. Once you have the 3 cards set up to work on the same job, test it and compare the results against running individual jobs. I personally think they’ll do better alone, but you should check it out for yourself. :) Have fun!

1
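A rough sketch of the suggested comparison: time the same number of training steps on the 3080 Ti alone versus on all visible cards. The device string (it assumes the 3080 Ti is GPU:0), step count, batch size, and model are all assumptions for illustration.

```python
# Hedged sketch: compare wall-clock time for a fixed number of training
# steps on one GPU versus all visible GPUs. All sizes are placeholders.
import time
import tensorflow as tf

def timed_fit(make_scope, label, steps=50, global_batch=96):
    with make_scope():
        model = tf.keras.applications.ResNet50(weights=None, classes=10)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Random data reused every step; a real benchmark should stream real images.
    x = tf.random.uniform((global_batch, 224, 224, 3))
    y = tf.random.uniform((global_batch,), maxval=10, dtype=tf.int32)
    ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(global_batch).repeat()
    start = time.perf_counter()
    model.fit(ds, steps_per_epoch=steps, epochs=1, verbose=0)
    print(f"{label}: {time.perf_counter() - start:.1f}s for {steps} steps")

# Assumes the 3080 Ti enumerates as /gpu:0; adjust if the cards differ.
single = tf.distribute.OneDeviceStrategy("/gpu:0")
multi = tf.distribute.MirroredStrategy()
timed_fit(single.scope, "3080 Ti alone")
timed_fit(multi.scope, "all three cards")
```

Keeping the global batch size identical in both runs is what makes the two wall-clock numbers directly comparable.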