[D] How to get the fastest PyTorch inference, and what is the "best" model-serving framework? Submitted by big_dog_2k on October 28, 2022 at 9:51 PM in MachineLearning
jukujala wrote on October 29, 2022 at 6:30 AM Has anyone tried converting ONNX to a TF SavedModel and serving it with TF Serving? TF has, at least historically, been good at inference.
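A rough sketch of the workflow the comment suggests, assuming the `onnx-tf` converter CLI is installed and that a PyTorch model has already been exported to `model.onnx` (paths, port numbers, and the model name `mymodel` are illustrative, not from the thread):

```shell
# Convert the ONNX graph to a TensorFlow SavedModel using the onnx-tf CLI.
# Output directory layout must include a numeric version subdirectory for TF Serving.
onnx-tf convert -i model.onnx -o /models/mymodel/1

# Serve the SavedModel with the official TF Serving container.
# 8501 exposes the REST API; --model_base_path points at the versioned directory.
docker run -p 8501:8501 \
  --mount type=bind,source=/models/mymodel,target=/models/mymodel \
  -e MODEL_NAME=mymodel \
  tensorflow/serving
```

Once the server is up, predictions can be requested via the REST endpoint, e.g. `POST http://localhost:8501/v1/models/mymodel:predict` with a JSON body containing an `instances` field. Whether this beats serving the PyTorch model directly (e.g. via TorchServe or ONNX Runtime) depends on the model and hardware, which is the open question in the thread.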