DevarshTare OP t1_j9b4u7s wrote
Reply to comment by TruthAndDiscipline in [D] What matters while running models? by DevarshTare
That's interesting. I was considering that purchase since the RTX 3060's larger VRAM makes sense for running bigger datasets or models, but its Tensor core count is significantly lower. So the GPU would run much larger models, just at a lower speed, I assume?
How has your experience been with larger models, especially video- or image-based ones?
DevarshTare OP t1_j9a7h2p wrote
Reply to comment by TruthAndDiscipline in [D] What matters while running models? by DevarshTare
Thanks a lot!
DevarshTare OP t1_j9a7gap wrote
Reply to comment by ggf31416 in [D] What matters while running models? by DevarshTare
Appreciate it! This gave me a better picture. I was stuck between the 3060 Ti and the 3070; in this case the 3060 Ti is the logical option. I'll be using Colab for training, and can probably optimise the model to run within 8 GB, if I'm not wrong?
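For anyone finding this later, a minimal sketch of the kind of optimisation I mean: loading the model in half precision, which roughly halves the VRAM its weights and activations need. ResNet-50 and the input shape here are stand-ins, not the actual model in question.

```python
import torch
from torchvision.models import resnet50

# FP16 weights use 2 bytes per parameter instead of 4, roughly
# halving the VRAM footprint. ResNet-50 is only a placeholder.
model = resnet50(weights=None).half().cuda().eval()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224, dtype=torch.float16, device="cuda")
    out = model(x)

print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```

INT8 quantization can cut memory further, but casting to FP16 is usually the easiest first step for squeezing a model into 8 GB.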
Submitted by DevarshTare t3_11725n6 in MachineLearning
DevarshTare t1_j95ct2p wrote
Reply to [D] Simple Questions Thread by AutoModerator
What matters while running models?
Hey guys, I'm new to machine learning and still learning the basics. I'm planning to buy a GPU soon for running pre-built models from Google Colab.
My question is: after you build a model, what matters for its runtime? Is it the memory, the bandwidth, or the CUDA cores you utilize?
Basically, what makes an already-trained model run faster when used in an application? I imagine it varies from application to application, but I just wanted to learn what matters most when running pre-trained models.
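To make the question concrete, here is a rough sketch of how inference speed can be measured on the GPU. The model and batch size are placeholders; the `synchronize` calls matter because CUDA kernel launches are asynchronous, so timing without them measures almost nothing.

```python
import time
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).cuda().eval()
x = torch.randn(8, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(10):       # warm-up: first calls include one-off CUDA setup cost
        model(x)
    torch.cuda.synchronize()  # wait for queued GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{elapsed / 100 * 1000:.2f} ms per batch of 8")
```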
DevarshTare OP t1_j9ngofc wrote
Reply to comment by ggf31416 in [D] What matters while running models? by DevarshTare
I've seen the same across multiple threads now: VRAM makes the difference between being able to run a model as-is and having to optimize it. This has been really helpful, thanks a lot, guys!
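For anyone else landing here, a quick back-of-the-envelope sketch for whether a model's weights will even fit in VRAM (ResNet-50 used purely as an example):

```python
from torchvision.models import resnet50

model = resnet50(weights=None)
params = sum(p.numel() for p in model.parameters())

# FP32 weights take 4 bytes per parameter; activations (and optimizer
# state, if training) add to this, so treat the figure as a lower bound.
print(f"{params / 1e6:.1f}M params ≈ {params * 4 / 1e9:.2f} GB of FP32 weights")
```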