Submitted by N3urAlgorithm t3_1115h5o in deeplearning
lambda_matt t1_j8db78v wrote
No more NVLink on the cards formerly known as Quadro, so if your models are VRAM-hungry you may be constrained on the RTX 6000 Adas. PCIe 5.0 and Genoa/Sapphire Rapids might even this out, but I'm not on the product development side, I'm not fully up to speed on next-gen, and there have been lots of delays on the CPUs/motherboards.
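As a quick sanity check on any given box, `nvidia-smi topo -m` shows the GPU link topology, and a few lines of PyTorch (sketch below, nothing Lambda-specific) will ask the driver whether each GPU pair can do peer-to-peer transfers, over NVLink or PCIe:

```python
# Quick check of GPU peer-to-peer access between every pair of GPUs.
# Without NVLink, "yes" here means P2P over the PCIe fabric.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: peer access {'yes' if ok else 'no'}")
```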
Also, the TDPs of pretty much all the Ada cards are massive, which will make multi-GPU configurations difficult and likely limit them to 2x.
NVIDIA has also killed off the DGX Station, so they are pretty committed to keeping the H100 a server platform.
There still isn't much real-world info, as very few of these cards are in the wild yet.
Here are some benchmarks, for the H100 at least: https://lambdalabs.com/gpu-benchmarks They're useful for comparing against the Ampere generation.
Disclaimer: I work for Lambda
N3urAlgorithm OP t1_j8dokrh wrote
So basically the RTX 6000 Ada does not support pooled memory, and a stack of Ada cards will only be useful for accelerating things, right?
Is something like that possible with the H100 instead?
Is the price difference, roughly $7.5k for the 6000 versus $30k for the H100, justified?
lambda_matt t1_j8facir wrote
The short answer is: it's complicated. Some workloads can handle being distributed across slower memory buses.
Frameworks have also implemented strategies for single-node distributed training: https://pytorch.org/tutorials/beginner/dist_overview.html
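For concreteness, here's a minimal single-node DistributedDataParallel sketch along the lines of that tutorial. It assumes a machine with two or more GPUs; the Linear model and random-data loop are just placeholders:

```python
# Minimal single-node DDP sketch: one process per GPU, gradients
# all-reduced over NCCL (NVLink or PCIe, whatever the box has).
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(1024, 1024).cuda(rank)  # stand-in for a real model
    model = DDP(model, device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(10):  # stand-in training loop with random data
        x = torch.randn(32, 1024, device=rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()  # gradients are synchronized across GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```

How well this scales is exactly where the interconnect matters: data-parallel training like this mostly tolerates PCIe, while model/tensor parallelism for models that don't fit in one card's VRAM is where the missing NVLink hurts.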
N3urAlgorithm OP t1_j8drfq9 wrote
You said that NVIDIA has killed off the DGX workstation, but from what I can see there's still a DGX for the H100?
lambda_matt t1_j8estga wrote
That's a server. The DGX Station was a downclocked V100/A100-based workstation:
https://images.nvidia.com/aem-dam/Solutions/Data-Center/nvidia-dgx-station-a100-infographic.pdf
lambda_matt t1_j8oozo1 wrote
https://lambdalabs.com/gpu-benchmarks now has the RTX 6000 Ada.