N3urAlgorithm
N3urAlgorithm OP t1_j8dqtv4 wrote
Reply to comment by CKtalon in GPU comparisons: RTX 6000 ADA vs Hopper h100 by N3urAlgorithm
Thank you. The TMA is actually a big deal for speeding things up, but from what I've found, even though the 4x RTX setup has more total VRAM, it can't be used for memory pooling. Still, if I'm not wrong, even with this limitation I can distribute training across the 4 GPUs, but each replica is capped at a maximum of 48GB, something like the data-parallel sketch below.
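A minimal sketch of what I mean, assuming PyTorch DDP (the model, sizes, and optimizer here are just placeholders): each process drives one GPU and holds a full copy of the model, so the 48GB per-card limit still applies even though there are four cards.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU, e.g. launched with: torchrun --nproc_per_node=4 train.py
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # Full replica of the model must fit within a single card's 48 GB.
    model = torch.nn.Linear(4096, 4096).cuda(rank)
    model = DDP(model, device_ids=[rank])

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Each rank processes its own shard of the batch.
    x = torch.randn(8, 4096, device=rank)
    loss = model(x).sum()
    loss.backward()   # gradients are all-reduced across the 4 GPUs
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```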
N3urAlgorithm OP t1_j8dokrh wrote
Reply to comment by lambda_matt in GPU comparisons: RTX 6000 ADA vs Hopper h100 by N3urAlgorithm
So basically the RTX 6000 doesn't support shared memory, and a stack of Ada cards will only be useful for accelerating things, is that right?
Is it possible to do something like that with the H100 instead?
And is the price difference legit, roughly $7.5k for the 6000 versus $30k for the H100?
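For what it's worth, a quick way to check whether pairs of cards can address each other's memory directly (assuming PyTorch is installed) would be something like this; without a fast peer link there's no single pooled address space across the GPUs:

```python
import torch

# Check peer-to-peer access between every pair of visible GPUs.
n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: peer access {'possible' if ok else 'not possible'}")
```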
N3urAlgorithm OP t1_j8cwwcr wrote
Reply to comment by Zeratas in GPU comparisons: RTX 6000 ADA vs Hopper h100 by N3urAlgorithm
Yes, since I'm going to use it for work, building it as a server would be fine.
N3urAlgorithm OP t1_j8drfq9 wrote
Reply to comment by lambda_matt in GPU comparisons: RTX 6000 ADA vs Hopper h100 by N3urAlgorithm
You said that Nvidia has killed off the DGX workstation, but as far as I can see from here there's still something for the H100?