
chatterbox272 t1_ix7mx5j wrote

> the cost/performance ratio for the 1080s seems great...

Only if your time is worthless, your ongoing running costs can be ignored, and expected lifespan is unimportant.

Multi-GPU instantly adds a significant amount of complexity that needs to be managed. It's not easy to just "hack it out" and have it work under multi-GPU: you either need to use frameworks that provide support (and make sure nothing you want to do will break that support), or you need to write it yourself. That's time and effort you wouldn't have to spend with a single GPU. You'll also be limited with respect to larger models, since breaking a model up over multiple GPUs (model parallelism) is way more complicated than breaking up batches (data parallelism). So anything that needs more than 11GB (one 1080 Ti's VRAM) for a single element is going to be impractical.
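For a sense of scale, here's a rough sketch of the easy case (data parallelism) in PyTorch, assuming PyTorch and a toy model purely for illustration:

```python
# Minimal data-parallel sketch (PyTorch assumed; model and sizes are illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Each GPU gets a full copy of the model and a slice of the batch,
    # so a single sample still has to fit inside one card's 11GB.
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(64, 512).cuda()  # a batch of 64 is split across the GPUs
out = model(x)                   # outputs are gathered back on GPU 0
```

Anything beyond that, i.e. splitting one model across cards, is where the real engineering effort starts.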

You'll have reduced throughput unless you have a server, since even HEDT platforms are unlikely to give you four PCIe Gen3 x16 slots. You'll be on x8 slots at best, and most likely x4. You're also pinned to much higher-end parts here, spending more on the motherboard/CPU than you would need to for a single 3090.
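Rough numbers on what those lane counts mean for host-to-device bandwidth (back-of-envelope; PCIe 3.0 is about 0.985 GB/s per lane after encoding overhead):

```python
# Back-of-envelope PCIe 3.0 bandwidth per slot width.
GBPS_PER_LANE = 0.985  # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{lanes * GBPS_PER_LANE:.1f} GB/s per direction")

# x16 ~15.8 GB/s, x8 ~7.9 GB/s, x4 ~3.9 GB/s. Every batch you feed a card
# crosses this link, so x4 can become the bottleneck for data-hungry jobs.
```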

It's also inefficient as all buggery. The 3090 has a TDP of 350W, the 1080 Ti 250W, and the comparison here is four 1080 Tis against a single 3090. That means for the same compute you're drawing roughly 3x the power (TDP is a reasonable but imperfect stand-in for true power draw), which will drastically increase the running cost of the system. You'll also need a more expensive power supply, and possibly even a wall-socket upgrade to draw that much power (four 1080 Tis to me means a 1500W PSU minimum, which would require a special 15A socket in Australia, where I live).
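The back-of-envelope version, sticking with the four-1080Ti-vs-one-3090 comparison and treating TDP as the actual draw (the usage hours and electricity price are just example assumptions):

```python
# Rough running-cost comparison (TDP as a stand-in for real draw;
# 8 h/day usage and $0.30/kWh are assumptions, not measurements).
tdp_1080ti, tdp_3090 = 250, 350          # watts
extra_watts = 4 * tdp_1080ti - tdp_3090  # 1000 W vs 350 W -> 650 W extra
hours_per_day, price_per_kwh = 8, 0.30

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"Power ratio:  {4 * tdp_1080ti / tdp_3090:.1f}x")                  # ~2.9x
print(f"Extra energy: ~{extra_kwh_per_year:.0f} kWh/year")                # ~1898 kWh
print(f"Extra cost:   ~${extra_kwh_per_year * price_per_kwh:.0f}/year")   # ~$569
```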

You're also buying cards that are at minimum three years old. They've seen some amount of use, and use during a period when GPU mining was a big deal (so many of the cards out there were pushed hard). The longer a GPU has been out of your possession, the less you can rely on how well it was kept. The older architecture will also be dropped from support sooner: Kepler was discontinued last year, so Maxwell and then Pascal (where the 10 series lies) are next in line. Probably a while away, but a good bit sooner than Ampere (which has to wait through Maxwell, Pascal, Volta, and Turing before it hits the chopping block).

TL;DR:
Pros: possibly slightly cheaper upfront.
Cons: requires more expensive supporting hardware, higher running costs, shorter expected lifespan, added multi-GPU complexity, and may not actually be compatible with your wall power.

TL;DR of the TL;DR: bad idea, don't do it.

7

Nerveregenerator OP t1_ix92czu wrote

OK, thanks, I think that clears up the drawbacks. I'd have to check which motherboard I'm using now, but generally, would you expect a 3090 to be compatible with a motherboard that works with a 1080 Ti? Thanks

1

incrediblediy t1_ix9xbce wrote

If your CPU/motherboard supports a PCIe 4.0 x16 slot, that's all you need for an RTX 3090. I have a 5600X with a cheap B550M-DS3H motherboard running an RTX 3090 + RTX 3060, and I also got a used RTX 3090 from eBay after the decline of mining. Just make sure your PSU can support it; it draws 370 W at max.
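If it helps, you can check what a card is actually pulling with NVML; this is just a quick sketch assuming the nvidia-ml-py (pynvml) package:

```python
# Quick power-draw check via NVML (assumes `pip install nvidia-ml-py`).
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    draw = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000            # mW -> W
    limit = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000   # mW -> W
    print(f"GPU {i} ({name}): {draw:.0f} W now, {limit:.0f} W limit")
pynvml.nvmlShutdown()
```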

2

Nerveregenerator OP t1_ixadbam wrote

Thanks for the thoughtful feedback! Also, lmk if you have any feedback on the pip package idea above that I added to the post!

1

incrediblediy t1_ixayv0c wrote

Sorry, I don't know of any package for benchmarking. If you find one, I can run it and tell you the results if needed, but I only use Win 10 Pro for training, if that matters.

1

Nerveregenerator OP t1_ixb60cu wrote

Oh, I was just thinking it could be useful (and possibly making one myself), as I feel like this is a common issue for people.

1