Inference locally on device, state of fast.ai/pytorch

Hi friends,
just back from Mobile World Congress, where I saw a lot of emphasis on edge computing and doing AI inference locally on mobile devices, communicating with the cloud only when totally essential.
I saw many examples related to IoT, for example sensors + AI doing inference locally without a connection, and connecting to the cloud only when a certain result happens (fraud detected, fire detected, etc.) and it’s totally essential.

At the moment I experiment with render.com and other services where I use fast.ai, and I would love to know what’s the latest in this area with fast.ai and PyTorch. Are there any future plans or upcoming features for local deployment, i.e. running inference locally on mobile devices without having to connect to a backend (connecting only when it’s essential, to update the local weights, etc.)? Something with functionality similar to tensorflow.js, for example, but for PyTorch/fast.ai? Or, if nothing like that is planned, is it possible to train a net using fast.ai, export the .pkl and use it locally on a mobile device through tensorflow.js?

This article seems to suggest that we can convert from PyTorch/fastai to TensorFlow:

So I guess that would be one way, although I wish we could use fast.ai end to end, on the server and on the local device with no connection; that would be perfect :wink:

thank you :wink:

PyTorch is getting more integration with other libraries through ONNX, and there are a couple of tutorials around on getting PyTorch models to Caffe2 for mobile. When I tried converting models to TensorFlow I got quite a few errors, but maybe the libraries have improved their integration by now. You wouldn’t be able to just export the .pkl file and expect it to work, though - you’d have to export the saved model and then write some custom TF code to replicate the data loading and the transforms.
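To make the ONNX route concrete, here’s a minimal sketch of the export step - a sketch only: the export directory, file names and input size are placeholders, `load_learner` is the fastai v1 API, and the fastai preprocessing (resizing, normalisation, etc.) is not captured, so it has to be re-implemented on the target side:

```python
import torch
from fastai.vision import load_learner  # fastai v1; adjust the import for your version

# Hypothetical paths: load the Learner you previously saved with learn.export()
learn = load_learner('export_dir', 'export.pkl')
model = learn.model.eval().cpu()

# Dummy input matching the shape the model expects (e.g. a 224x224 RGB image)
dummy = torch.randn(1, 3, 224, 224)

# Export only the underlying PyTorch model; data loading and transforms
# still have to be replicated in whatever framework consumes the ONNX file
torch.onnx.export(model, dummy, 'model.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)
```

From the resulting model.onnx you can then try the Caffe2 or TensorFlow tooling, depending on the target.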

Hey Tom, thank you very much, that sounds good. I bet that in a few months the workflows for these kinds of processes will be much clearer and smoother. I’ll check the links you shared, thanks a lot :wink:

Agreed! I think the competition from PyTorch has finally gotten TensorFlow to improve. Let me know if you manage to get something working! :slight_smile:

definitely! thank u :wink:

Hello everyone!
Picking up this thread - has anyone got any updates on deploying fastai models locally on mobile devices?

Thank you.

I’m also interested to hear if someone has tested a reliable path for mobile deployment.

I’m also highly interested in this! I’ve seen that TF Lite runs pretty well on mobile, so if there were a way to convert fastai models to TF Lite, that could work great too!
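If the model can first be exported to ONNX (as discussed above), one possible route to TF Lite would be something like the sketch below. This is only a rough outline under the assumption that onnx-tf handles the model’s ops; its API has changed between versions, and the file names are placeholders:

```python
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare  # pip install onnx-tf; behaviour varies by version

# Load the ONNX file exported from the PyTorch/fastai model
onnx_model = onnx.load('model.onnx')

# Convert to a TensorFlow SavedModel (the directory name is arbitrary)
prepare(onnx_model).export_graph('saved_model')

# Convert the SavedModel to a TF Lite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```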

@javismiles PyTorch models can be deployed directly to NVIDIA’s Jetson devices, thanks to the magic of Docker. This article gives a good overview

You can run a PyTorch model (the one you trained with fastai) on a device natively using PyTorch Mobile right now.

It’s pretty straightforward, but maybe it helps someone without a very deep understanding, or at least helps spread the word that PyTorch Mobile exists.
I’m posting here because I found this thread (while searching for how to deploy a native solution on a device) before I found out about PyTorch Mobile.
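In case it saves someone a search, the export side is roughly the sketch below (the input size and file name are placeholders, and the fastai transforms - resizing, normalisation - still have to be re-implemented in the app). The saved file is then loaded on the device with the PyTorch Mobile runtime (org.pytorch:pytorch_android on Android, LibTorch on iOS):

```python
import torch

# Assumes `learn` is a trained fastai Learner; learn.model is a plain nn.Module
model = learn.model.eval().cpu()

# Trace the model with a dummy input of the expected shape (e.g. 224x224 RGB)
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Save the TorchScript module; this is the file the mobile app loads.
# torch.utils.mobile_optimizer.optimize_for_mobile can optionally be applied first.
traced.save('model.pt')
```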

@mariano22 fantastic, thank you very much for sharing the info and link :slight_smile:
