[D] How to get the fastest PyTorch inference and what is the "best" model serving framework? Submitted by big_dog_2k t3_yg1mpz on October 28, 2022 at 9:51 PM in MachineLearning 31 comments 55
robdupre t1_iu7z0uu wrote on October 29, 2022 at 6:59 AM We use ONNX models deployed with Nvidia's TensorRT. We have been impressed with it so far. Permalink 3