PyTorch at the Edge: Deploying Over 964 TIMM Models on Android with TorchScript and Flutter

Hello fastai friends, I wrote a blog post about how to pick a model from TIMM, train it with fastai, and deploy it on Android, for free.

Here’s the TLDR on Twitter -

Here’s the full blog post -

And the GitHub repo -


In this blog post:

you export the model with a .pkl extension,
then a few lines later you use the .pt extension :slight_smile:
e.g. torchscript_edgenext_xx_small.pt
How do I convert my exported .pkl model to TorchScript with the .pt extension?
Thanks

Hello @bahman_apl, I used the following code to export to TorchScript. As you can see, the model is taken from the Learner with learn.model

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Put the model on CPU and in eval mode before tracing
learn.model.cpu()
learn.model.eval()
# Dummy input matching the training input size (batch, channels, height, width)
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(learn.model, example)
optimized_traced_model = optimize_for_mobile(traced_script_module)
optimized_traced_model._save_for_lite_interpreter("torchscript_edgenext_xx_small.pt")
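As a sanity check (not in the original post), you can reload the saved file and compare its output against the original model. A minimal, self-contained sketch using a small stand-in module instead of the fastai Learner; note that `_load_for_lite_interpreter` is a private PyTorch API and may change between versions:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import _load_for_lite_interpreter  # private API, subject to change

# Stand-in for learn.model: any nn.Module is exported the same way
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("check.pt")

# Reload via the lite interpreter and verify the outputs agree
reloaded = _load_for_lite_interpreter("check.pt")
assert torch.allclose(model(example), reloaded(example), atol=1e-4)
```

If the outputs match here, a mismatch on the device points at preprocessing (resize/normalization) rather than the export itself.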

You can get the model from the .pkl file by using the load_learner function.

learner = load_learner('export.pkl')

The load_learner function automatically loads the exported model, data transforms, and other information needed to recreate the Learner object. Once loaded, you can use it to export the model, make predictions, fine-tune it, and perform other tasks.


So for me, first I should load the model:

learner = load_learner('export.pkl')

then do this:
learner.model.cpu()
continue to the end, and
on the last line I can name my model whatever I like and use mymodel.pt for predictions?
optimized_traced_model._save_for_lite_interpreter("mymodel.pt")

Yes that’s right :+1:


When I attempt to load the model, I get an exception saying that Flutter thinks the asset is empty or nonexistent.
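This isn't answered in the thread, but a common cause of that error is the model file not being declared under assets in pubspec.yaml, so Flutter never bundles it. A hedged sketch of the declaration; the path below is illustrative and must match where you actually placed the .pt file:

```yaml
# pubspec.yaml (excerpt) — adjust the path to your project layout
flutter:
  assets:
    - assets/models/torchscript_edgenext_xx_small.pt
```

After editing pubspec.yaml, re-run the build (a hot reload is not enough to pick up new assets).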