Submitted by soupstock123 t3_106zlpz in deeplearning
VinnyVeritas t1_j3l0gqt wrote
I don't know if that's going to work well with only 16 PCIe lanes; everyone I've seen here making 4-GPU machines uses CPUs that have 48 or 64 PCIe lanes.
Also you'll need a lot of watts to power that monster, not to mention you need a 10-20% margin if you don't want to fry the PSU.
soupstock123 OP t1_j3l0q8f wrote
Yeah, that's basically what I've discovered too. The mobo with the 16 PCIe lanes isn't going to work out. I've changed my build to Threadripper. Any advice or suggestions for a PSU that can handle the workload?
VinnyVeritas t1_j3nh3g4 wrote
I suppose one PSU will take care of the motherboard + CPU + some GPUs, and the other one will take care of the remaining GPUs.
So if you get 4x 3090, that's 350W x 4 = 1400W just for the GPUs, plus ~300W for the CPU, plus power for the rest of the components, drives, etc. So let's say we round that up to 2000W, then add at least a 10% margin: that's 2200W total.
So maybe a 1600W PSU for the mobo and some GPUs, and another 1000W or more for the remaining GPUs. Note: if you go with the 3090 Ti, it's more like 450-500W per card, so you have to redo the maths.
Or if you want to go future proof, just put two 1600W PSUs, and then you can just swap your 3090 with 4090 in the future and not worry about upgrading PSUs.
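The PSU sizing above can be sketched as a quick calculation. The wattage figures are the thread's own estimates; the ~100W for drives, fans, and other components is an assumed placeholder, not a measured value:

```python
def psu_budget(gpu_watts, n_gpus, cpu_watts=300, other_watts=100, margin=0.10):
    """Total PSU capacity needed: summed component draw plus a safety margin."""
    draw = gpu_watts * n_gpus + cpu_watts + other_watts
    return round(draw * (1 + margin))

# 4x RTX 3090 at ~350W each, ~300W for the CPU, ~100W assumed for the rest
print(psu_budget(350, 4))  # → 1980 (the thread rounds the draw up to 2000W first, giving 2200W)

# 4x 3090 Ti at ~475W each shows why the margin matters for upgrades
print(psu_budget(475, 4))  # → 2530
```

Running the 3090 Ti numbers is what motivates the two 1600W PSUs: a single unit can't cover the upgraded budget.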
soupstock123 OP t1_j3nmsho wrote
I'm seeing the argument for two 1600W PSUs. That's fine for the mining rig frame, but it's basically confirming to me that this is never going to fit in a case lol.
VinnyVeritas t1_j3rrzvr wrote
Actually, I've been sort of looking at ML computers (kind of browsing and dreaming that one day I'd have one, though it's always going to be beyond my means and needs anyway). Anyway, they can put two PSUs in one box. Obviously they're made by companies, so the total cost is two or three times the cost of the parts alone (i.e. building it yourself would be 2-3x cheaper), but it could inspire you when picking your parts.
https://bizon-tech.com/amd-ryzen-threadripper-up-to-64-cores-workstation-pc
https://shop.lambdalabs.com/gpu-workstations/vector/customize