
sarmientoj24 t1_ixvouw5 wrote

The thing is, either you wait for TF support or you code it yourself. Research is fast-paced, so researchers want a boilerplate, or just to build on top of another repo.

For deployment, there are plenty of deployment tools for PyTorch nowadays. I use BentoML for deploying computer vision models. If you want a lighter model, there are libraries and repositories that support sparsification, pruning, etc. A lot of newer repos aimed at industrial use (mmdet, YOLOv5/6/7) have sparsification and pruning support, and some even have distillation built in. Again, I am not a TF guy, so I haven't seen deployment support on the TF side as rich as what these PyTorch-based repos offer.
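To give a sense of how little glue code that takes, here is a minimal sketch of serving a PyTorch image classifier with BentoML's 1.x service/runner API. The model name `resnet_classifier`, the preprocessing pipeline, and the service name are placeholders I made up, not anything specific from this thread:

```python
import bentoml
import numpy as np
import torch
from bentoml.io import Image, NumpyNdarray
from torchvision import transforms

# One-time step (e.g. at the end of training): store the model locally.
# bentoml.pytorch.save_model("resnet_classifier", trained_model)

# Load the stored model as a runner and expose it through a service.
runner = bentoml.pytorch.get("resnet_classifier:latest").to_runner()
svc = bentoml.Service("cv_service", runners=[runner])

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@svc.api(input=Image(), output=NumpyNdarray())
async def predict(img) -> np.ndarray:
    tensor = preprocess(img).unsqueeze(0)    # PIL image -> batched tensor
    logits = await runner.async_run(tensor)  # inference runs in the runner
    return torch.softmax(logits, dim=1).numpy()
```

You would then run `bentoml serve service.py:svc` to get an HTTP endpoint, plus the packaging needed to containerize it later.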

While it is true that research and deployment are different, it is up to your team how to do MLOps on those SOTA methods. A new SSL method, new loss functions, new stuff: either you borrow it from a repo or you code it yourself. In industry, the prototyping procedure for experimentation is critical for a fast-moving product. You don't have time to rewrite code across frameworks, so you test repositories and then build on top of the ones that pan out.

6

erannare t1_ixvt17s wrote

TensorFlow has model optimization libraries that cover, among other things, weight clustering, pruning, and weight quantization, along with training-time support for all three.
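For concreteness, a minimal sketch of what that looks like with the `tensorflow_model_optimization` package; the toy `base_model` below is just a stand-in for a real trained Keras model:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy stand-in for a trained Keras model.
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Magnitude-based pruning (fine-tune with tfmot.sparsity.keras.UpdatePruningStep()
# as a callback so the sparsity schedule is actually applied).
pruned = tfmot.sparsity.keras.prune_low_magnitude(base_model)

# Weight clustering into a small number of shared centroids.
clustered = tfmot.clustering.keras.cluster_weights(
    base_model,
    number_of_clusters=16,
    cluster_centroids_init=tfmot.clustering.keras.CentroidInitialization.LINEAR,
)

# Quantization-aware training wrapper.
qat = tfmot.quantization.keras.quantize_model(base_model)

# Post-training quantization when exporting to TFLite.
converter = tf.lite.TFLiteConverter.from_keras_model(base_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```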

2

sarmientoj24 t1_ixw1ca2 wrote

That's great. Again, I am not used to TF. My point was that PyTorch's deployment capabilities are near TF's nowadays, and that most new SOTA research leans toward publishing its code in PyTorch.

1