Submitted by joossss t3_107c95i in MachineLearning
[removed]
Thanks! The newest Threadrippers are still based on Zen 3, so they don't support AVX-512. I would definitely like to go with A100s, but we don't have the budget for that.
What made you decide to run an on-prem server instead of going to the cloud? I'm a data science manager and I'm currently looking at our options. I like self-hosting for most things, but I'm up in the air about training deep learning models.
Cloud is almost always better imo. At small scale you can prototype quicker and spend less time messing with hardware by using cloud services. Once you actually need to scale your product, a cloud solution makes that really easy. The "but it's cheaper" argument gets less and less valid every year, and it often doesn't account for the time and effort spent setting up a local cluster.
If you use Ray, you can set up a GPU cluster in less than 30 minutes.
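For example, a rough sketch of the Python side (assuming Ray is already installed on every node and the cluster was started with `ray start` on the head and workers; the task count and the training body are placeholders):

```python
import ray

# Connect to an already-running Ray cluster (head node started with
# `ray start --head`, workers joined with `ray start --address=<head-ip>:6379`).
ray.init(address="auto")

# Reserve one GPU per invocation of this task; Ray handles placement
# across whichever machines in the cluster have a free GPU.
@ray.remote(num_gpus=1)
def train_on_gpu(task_id):
    import torch  # assumes PyTorch is installed on the workers
    x = torch.randn(1024, 1024, device="cuda")
    return task_id, float((x @ x).sum())  # stand-in for a real training loop

# Launch one task per GPU you expect to be available (4 is a placeholder).
results = ray.get([train_on_gpu.remote(i) for i in range(4)])
print(results)
```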
I think Ray is great! But Ray will not click your GPUs into a motherboard, install Linux on all the machines, set up nvidia-docker, power-cycle machines when there are issues, periodically clear space on the HDDs, etc. It's the non-software part of cluster management that ends up being the most annoying and time-consuming.
I have always felt that network/security and integration with internal IT systems were worse than the physical maintenance. People should expect to invest time in integrating with an on-prem data center environment and in the physical maintenance work. I think small teams are better served by a small GPU cluster with a fixed budget than by large cloud GPU training costs. Mid-to-large companies do better with cloud than on-prem because they get better separation of environments, but it costs more.
The main reason we are not going to the cloud is that we are a research institution, so our funding is project-based, meaning we have to use it within the allotted time. The second reason is that we already have the GPUs, so the server pays for itself faster.
A recommendation I learned from a past job: use Slurm or a similar scheduler to take turns on the GPUs so you don't end up clobbering each other's models.
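For example, a minimal batch script might look like the sketch below (assuming a working Slurm install with GPU gres configured; the resource numbers and script paths are placeholders). Submit with `sbatch train_job.sh`; jobs queue until a GPU is free, and Slurm typically restricts `CUDA_VISIBLE_DEVICES` so each job only sees the GPUs it was allocated:

```bash
#!/bin/bash
#SBATCH --job-name=train-run        # name shown in squeue
#SBATCH --gres=gpu:1                # reserve one GPU; job waits in the queue until one is free
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=24:00:00             # wall-clock limit
#SBATCH --output=logs/%x-%j.out     # per-job stdout/stderr

# train.py and its config are placeholders for whatever you actually run.
python train.py --config configs/my_model.yaml
```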
Thanks for the info! I was thinking about how to do that.
Are you looking to do distributed training across machines? Otherwise the NIC seems complete overkill.
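To spell that out: by distributed training I mean something like multi-node data-parallel training, where the gradient all-reduces between machines go over the NIC via NCCL. A rough illustrative sketch with PyTorch DDP, assuming torchrun and placeholder node counts/addresses:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launched on every node with torchrun, e.g.
#   torchrun --nnodes=2 --nproc_per_node=4 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 train.py
# Inter-node gradient traffic goes through NCCL over the NIC;
# NCCL_SOCKET_IFNAME can pin it to a specific interface if needed.

def main():
    dist.init_process_group(backend="nccl")        # reads the env vars torchrun sets
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])

    x = torch.randn(32, 1024, device=local_rank)
    model(x).sum().backward()                      # gradients are all-reduced here

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The bigger the model, the more gradient traffic per step, which is when NIC bandwidth starts to matter.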
Only this server is planned. I just went with the recommendation from NVIDIA's website, which stated 100 Gbps per A100, but that makes more sense now that I think of it in the context of distributed training. What NIC speed would be enough in that case?
10 Gbps is more than sufficient; data loading from the internet is not the bottleneck, and most likely you'll have the data stored on the machine itself anyway. Btw, why did you remove the post?
Yeah true and thanks :)
I did not remove it. It was removed by the moderators for some reason.
Weary-Marionberry-15 t1_j3li7ao wrote
I don't think this looks bad at all. I would probably push for A100 80 GB GPUs instead and the latest-gen 64-core Threadripper.