seek_it t1_iwyyo7k wrote
Can someone explain how its inference can be fast enough to run in real time?
Ok-Alps-7918 t1_iwz05ql wrote
Using an architecture optimised for mobile (MobileNet-style), compiling the ML model for mobile (e.g. for the AI accelerator chip on iOS devices), quantisation of the model, pruning, etc. I’d also imagine it’s being run locally on the device instead of in the cloud.
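For anyone curious what a couple of those steps look like in practice, here’s a rough PyTorch sketch of magnitude pruning plus dynamic quantisation. The model choice, pruning ratio, and file name are purely illustrative, not what the app in question actually uses:

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

# Illustrative stand-in model; the real app's network is unknown.
model = mobilenet_v2(weights="DEFAULT")
model.eval()

# Magnitude pruning: zero out the 30% smallest weights in each conv layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Dynamic int8 quantisation of the linear layers to cut size and latency.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Trace the model so a mobile runtime (e.g. Core ML, TFLite, PyTorch Mobile)
# can compile it for the device's accelerator.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(quantized, example_input)
traced.save("mobilenet_v2_pruned_quantized.pt")
```

In a real deployment you’d typically follow this with a converter step (coremltools for the Apple Neural Engine, or the TFLite converter for Android) rather than shipping the traced graph directly.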
seek_it t1_iwz0epu wrote
That's why I'm even more surprised. Models like this are usually GAN-based, and GAN inference still requires serious compute power! On-device inference is even more astonishing!
pennomi t1_iwz8l6m wrote
It’s likely not a GAN.
Ok-Western2685 t1_iwzos8k wrote
It actually is, and indeed runs on the device.
They did amazing work in that regard!