[D] How to get the fastest PyTorch inference and what is the "best" model serving framework? Submitted by big_dog_2k t3_yg1mpz on October 28, 2022 at 9:51 PM in MachineLearning 31 comments 55
big_dog_2k OP t1_iu86mb7 wrote on October 29, 2022 at 8:52 AM, replying to ibmw: Thanks! It sounds like investing time in ONNX and using Triton is the best bet.