Submitted by et_tu_brutits t3_xt7vcn in MachineLearning
Friends,
Would appreciate some insight/guidance in choosing the optimal GPU for general training purposes, given some constraints I won't delve into in much detail.
I run a bare-metal hypervisor on a Dell R820 and plan to do GPU passthrough. My constraints restrict me to either an RTX 3060 or an RTX 2060 (12GB). Cost isn't an issue.
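For what it's worth, once the card is passed through to the guest, a quick sanity check like the sketch below (assuming a CUDA-enabled PyTorch install inside the VM; either card should work the same way) confirms the GPU and its full 12GB are actually visible to the framework:

```python
# Minimal sanity check inside the passthrough guest (assumes PyTorch with CUDA installed).
import torch

if torch.cuda.is_available():
    idx = torch.cuda.current_device()
    props = torch.cuda.get_device_properties(idx)
    print(f"Device: {props.name}")
    print(f"VRAM:   {props.total_memory / 1024**3:.1f} GiB")
    print(f"SMs:    {props.multi_processor_count}")
else:
    print("CUDA device not visible - check IOMMU/passthrough configuration.")
```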
Card | Memory | Tensor Cores | CUDA Cores | Base Clock | Boost Clock |
---|---|---|---|---|---|
RTX 2060 (12GB) | 12GB | 240 | 1920 | 1365 MHz | 1680 MHz |
RTX 3060 | 12GB | 112 | 3584 | 1320 MHz | 1780 MHz |
Considerations:

- The 2060 has more tensor cores, but Ampere's tensor cores are roughly 50% faster per core than Turing's. Factoring in clock speeds, I think the 2060 has at most a slight edge here, or they're roughly equivalent.
- The 3060 clearly wins on CUDA cores.
I'm likely polishing a turd here, but I'm leaning towards the 3060 on account of longer-term library support. I also don't have experience with either card, so I don't know whether the 3060's additional CUDA cores will make a major difference in TensorFlow/PyTorch.
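If it helps, one rough way to answer that empirically is a tiny matmul benchmark like the sketch below (sizes, dtypes and iteration counts are arbitrary assumptions, not a proper benchmark). Running it on each card, once in plain FP32 and once under autocast (which routes eligible ops to the tensor cores in FP16), gives a feel for how much the CUDA-core vs. tensor-core difference actually matters for your workloads:

```python
# Rough throughput sketch for comparing cards (assumes PyTorch with CUDA).
# Matrix size and iteration count are arbitrary - adjust to fit 12GB of VRAM.
import time
import torch

def bench(use_amp: bool, n: int = 4096, iters: int = 50) -> float:
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        if use_amp:
            # Autocast runs the matmul in FP16, which exercises the tensor cores.
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                c = a @ b
        else:
            c = a @ b
    torch.cuda.synchronize()
    return iters / (time.time() - start)

print(f"FP32:          {bench(False):.1f} matmuls/s")
print(f"FP16 autocast: {bench(True):.1f} matmuls/s")
```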
What's your recommendation to maximize value and future reuse for general purpose training? Thank you in advance and have a splendid weekend.
Crazy-Space5384 t1_iqoozdm wrote
I’d take the 2060 only if I found a particularly good deal - otherwise it’s the 3060.