Just back from Mobile World Congress, where I saw a lot of emphasis on edge computing and doing AI inference locally on mobile devices, communicating with the cloud only when absolutely essential.
I saw many IoT examples: sensors plus AI doing inference locally without a connection, and only when a given result occurs (fraud detected, fire detected, etc.) and it's truly essential does the device connect to the cloud.
At the moment I run experiments with render.com and other services where I use fast.ai, and I would love to know what's the latest in this area with fast.ai and PyTorch. Are there any future plans or upcoming features for local deployment, i.e. running inference locally on mobile devices without having to connect to a backend (connecting only when essential, e.g. to update the local weights)? Something with similar functionality to, for example, TensorFlow.js, but with PyTorch/fast.ai? Or, if nothing similar is planned, is it possible to train a net using fast.ai, export the pkl, and use it locally on a mobile device through TensorFlow.js?
This article seems to suggest that we can convert from PyTorch/fastai to TensorFlow:
so I guess that would be one way, although I wish we could use fast.ai end to end, on the server and on the local device with no connection; that would be perfect.
PyTorch is getting more integration with other libraries through ONNX, and there are a couple of tutorials around on getting PyTorch models to Caffe2 for mobile. When I tried converting models to TensorFlow I got quite a few errors, but maybe the libraries have improved their integration by now. You wouldn't be able to just export the pkl file and expect it to work, though: you'd have to use the saved model and then write some custom TF code to handle loading the data and creating the transforms.
Hey Tom, thank you very much! That sounds good. I bet that in a few months these kinds of workflows will become clearer and smoother. I'll check the links you shared, thanks a lot.
You can run a PyTorch model (one you trained with fastai) natively on a device using PyTorch Mobile right now.
It's pretty straightforward, but maybe this helps someone without a very deep understanding, or helps spread awareness that PyTorch Mobile exists.
I'm posting here because I found this thread (while searching for how to deploy a native solution on a device) before I found out about PyTorch Mobile.
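For anyone landing here the same way, a minimal sketch of the server-side step, assuming a placeholder model rather than an actual fastai learner: convert the trained model to TorchScript and save it, then load the saved file on the device with the PyTorch Mobile runtime for Android or iOS.

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder network standing in for a model trained with fastai.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
model.eval()

# Convert to TorchScript by tracing with an example input.
example = torch.randn(1, 10)
traced = torch.jit.trace(model, example)

# Apply mobile-specific optimization passes and save the result;
# this file is what the Android/iOS PyTorch Mobile runtime loads.
mobile_model = optimize_for_mobile(traced)
mobile_model.save("model_mobile.pt")
```

The saved TorchScript file is self-contained (weights plus graph), so the device does not need Python or the original training code, only the PyTorch Mobile runtime.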