Submitted by harishprab t3_yga0s1 in MachineLearning
harishprab OP t1_iucy0ne wrote
Reply to comment by PlayOffQuinnCook in [R] Open source inference acceleration library - voltaML by harishprab
We use the TorchFX library to do this on CPU, and TensorRT handles it on GPU. We're not using any custom function for the fusing; TorchFX and TensorRT do it anyway.
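As a rough illustration (not necessarily voltaML's exact code path), TorchFX ships a built-in fusion pass that traces the model and folds Conv+BatchNorm pairs based on the traced graph structure. The `SmallNet` module below is just a made-up example:

```python
# Minimal sketch of graph-based Conv+BN fusion with TorchFX
# (illustrative only; voltaML's actual pipeline may differ).
import torch
import torch.nn as nn
from torch.fx.experimental.optimization import fuse

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Attribute names here are arbitrary; the pass pattern-matches
        # on module types in the traced graph, not on the names.
        self.c1 = nn.Conv2d(3, 16, 3, padding=1)
        self.b1 = nn.BatchNorm2d(16)
        self.r1 = nn.ReLU()

    def forward(self, x):
        return self.r1(self.b1(self.c1(x)))

model = SmallNet().eval()          # BN folding applies in eval mode
fused = fuse(model)                # symbolically traces, then folds b1 into c1
print(fused.graph)                 # the separate batch_norm call is gone; ReLU remains
```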
PlayOffQuinnCook t1_iueq6l4 wrote
I understand that. But let's say I have these operators named c1, b1, r1 instead of what it expects; then the fusion logic won't work. So my question was whether this library works only on a fixed set of models defined in the library itself, or whether it can work against any model users write.