Submitted by N3urAlgorithm t3_1115h5o in deeplearning
Hey everyone, I'm going to build a new workstation for work and I would like some help weighing the pros/cons of the different GPUs, since they're fairly new and there isn't much information online.
I was deciding between 4x RTX 6000 Ada or 1 Hopper H100. The general idea would be to train various deep learning models, principally vision transformers, and build some kind of service on top of them. What about NVLink? The cloud option is not being considered at the moment due to the recent bills.
Any suggestions or clarifications are highly appreciated.
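For context on the multi-GPU/NVLink part of the question: with several cards the training would most likely be data-parallel, where gradients are all-reduced across GPUs every step, and that communication is the part sensitive to the interconnect. A minimal sketch of what that setup could look like, assuming PyTorch DistributedDataParallel and a torchvision ViT as a stand-in for the actual models:

```python
import os
import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Hypothetical launch: torchrun --nproc_per_node=4 train_ddp.py
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # ViT-B/16 used here only as a placeholder vision transformer
    model = torchvision.models.vit_b_16().cuda(local_rank)
    # DDP all-reduces gradients across the GPUs on every backward pass;
    # this is the step where interconnect bandwidth (NVLink vs PCIe) matters.
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    for _ in range(10):  # dummy steps with random data, just to show the loop
        images = torch.randn(32, 3, 224, 224, device=local_rank)
        labels = torch.randint(0, 1000, (32,), device=local_rank)
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```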
Zeratas t1_j8cwg7l wrote
You're not going to be putting an H100 in a workstation. That's a server card.
With the GPUs you were mentioning, are you prepared to spend 30 to 50 thousand dollars just on the GPUs?
IIRC, the A6000s are the top of the line desktop cards.
IMHO, take a look at the specs and the performance on your own workload. You'd get better value doing something like one or two A6000s, and maybe investing in a longer-term server-based solution.
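On the "performance on your own workload" point, a minimal micro-benchmark sketch (assuming PyTorch and torchvision, and that a ViT-B/16 training step is representative of OP's workload) that could be run on each candidate card:

```python
import time
import torch
import torchvision

# Times forward + backward + optimizer steps of a ViT-B/16 on whatever GPU
# is visible; swap in your own model and batch size to compare cards.
device = torch.device("cuda")
model = torchvision.models.vit_b_16().to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

batch_size = 32
images = torch.randn(batch_size, 3, 224, 224, device=device)   # dummy images
labels = torch.randint(0, 1000, (batch_size,), device=device)  # dummy labels

# Warm-up iterations so kernel autotuning doesn't skew the timing
for _ in range(5):
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()

torch.cuda.synchronize()
start = time.perf_counter()
steps = 50
for _ in range(steps):
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()
torch.cuda.synchronize()

elapsed = time.perf_counter() - start
print(f"{steps / elapsed:.2f} steps/s, {batch_size * steps / elapsed:.1f} images/s")
```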