Submitted by GPUaccelerated t3_yf5jm3 in deeplearning
GPUaccelerated OP t1_iuim3cp wrote
Reply to comment by ShadowStormDrift in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Very cool! But I think you mean 24GB of VRAM for the 3090?
I'm having issues loading the web page, btw.
ShadowStormDrift t1_iuivphh wrote
GPUaccelerated OP t1_iuixkzj wrote
So cool! Good for you!