Fastai to browser pipeline

I have been investigating exporting fastai models to formats that can be run browser-side.
There are some caveats, but it would be interesting to make a straightforward export path that requires little effort from the user.
The main motivation is to reduce HTTP requests in vision tasks: not so much for the server load, but to do rapid predictions from a camera feed without a lot of networking going on, since uploading images can be quite slow.

I have looked into ONNX and WebDNN (which also seems to use ONNX), and PyTorch versions seem to be one of the caveats.

Here is what I tried with WebDNN's PyTorchConverter:

    from fastai.vision import *
    import torch
    from torch.autograd import Variable
    from webdnn.frontend.pytorch import PyTorchConverter

    # path and model_name point at the directory and weights of the trained fastai model
    cls = ['wolf', 'not_wolf']
    empty_data = ImageDataBunch.single_from_classes(path, cls, tfms=get_transforms(), size=224).normalize(imagenet_stats)
    learn = create_cnn(empty_data, models.resnet50)
    learn.load(model_name)
    learn.model.cpu()

    dummy_input = Variable(torch.randn(1, 3, 224, 224).cpu())  # one RGB 224 x 224 picture will be the input to the model
    graph = PyTorchConverter().convert(learn.model, dummy_input)

This generates output (when not using the nightly pytorch/torchvision) but complains about a missing operator:

    RuntimeError: ONNX export failed: Couldn't export operator aten::adaptive_max_pool2d

I plan to investigate whether this can be overcome, and what needs to be implemented (see the sketch below).

Here are an issue comment and a PR related to that:
https://github.com/pytorch/pytorch/issues/5310#issuecomment-383900481
https://github.com/pytorch/pytorch/pull/9711
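
One possible workaround, since the input size is fixed at 224 x 224 anyway, might be to swap the adaptive pooling layers in the fastai head for fixed-size ones before converting/exporting. This is only a sketch of the idea; replace_adaptive_pooling is a hypothetical helper, and it assumes a resnet50 backbone whose final feature map is 7 x 7 and that learn is the learner from above:

    import torch.nn as nn

    def replace_adaptive_pooling(module, kernel_size=7):
        # Recursively swap AdaptiveAvg/MaxPool2d for fixed-size pooling.
        # With a 224 x 224 input, a resnet50 body produces a 7 x 7 feature map,
        # so a plain pooling layer with kernel_size=7 yields the same 1 x 1 output.
        for name, child in module.named_children():
            if isinstance(child, nn.AdaptiveAvgPool2d):
                setattr(module, name, nn.AvgPool2d(kernel_size))
            elif isinstance(child, nn.AdaptiveMaxPool2d):
                setattr(module, name, nn.MaxPool2d(kernel_size))
            else:
                replace_adaptive_pooling(child, kernel_size)

    replace_adaptive_pooling(learn.model)  # then retry the conversion/export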

This works to generate an ONNX file with the same torch version that fastai uses:

    from fastai.vision import *
    import torch
    from torch.autograd import Variable

    path = Path('/home/toffe/data/wolf_detector')
    cls = ['wolf', 'not_wolf']
    empty_data = ImageDataBunch.single_from_classes(path, cls, tfms=get_transforms(), size=224).normalize(imagenet_stats)
    learn = create_cnn(empty_data, models.resnet50)
    learn.load('wolf_not_wolf__res50___stage-2')
    learn.model.cpu()

    # Export the trained model to ONNX
    dummy_input = Variable(torch.randn(1, 3, 224, 224).cpu())  # one RGB 224 x 224 picture will be the input to the model
    torch.onnx.export(learn.model, dummy_input, path/"models/wolf.onnx")
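
To sanity-check the exported file before going further, a minimal check (assuming the onnx Python package is installed) could look like this:

    import onnx

    onnx_model = onnx.load(str(path/"models/wolf.onnx"))
    onnx.checker.check_model(onnx_model)                  # raises if the graph is malformed
    print(onnx.helper.printable_graph(onnx_model.graph))  # human-readable summary of the graph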

Importing the ONNX model into onnx-tensorflow should be straightforward, like so:

    import onnx
    import numpy as np
    from onnx_tf.backend import prepare

    onnx_model = onnx.load(str(path/"models/wolf.onnx"))  # load the ONNX model
    tf_rep = prepare(onnx_model)                          # convert to a TensorFlow representation
    dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)
    output = tf_rep.run(dummy_input)                      # run the loaded model

but currently there is this issue:

According to the documentation you need to install protobuf before installing onnx if you are using pip; the conda version did not seem to work:

    pip uninstall onnx
    sudo apt-get install protobuf-compiler libprotoc-dev
    pip install onnx
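
After reinstalling, a quick way to confirm that the package at least imports cleanly (just a minimal smoke test, nothing specific to this model):

    import onnx
    print(onnx.__version__)  # should import without protobuf-related errors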

Currently the onnx_tf prepare call doesn't raise an exception; it just seems to stall and never finish.
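
One thing that might help narrow this down (an assumption on my part, not something I have verified with this model) is to skip the in-process run and instead export the TensorFlow graph to a file with onnx-tf's export_graph, then load and run that separately:

    import onnx
    from onnx_tf.backend import prepare

    onnx_model = onnx.load(str(path/"models/wolf.onnx"))
    tf_rep = prepare(onnx_model)
    tf_rep.export_graph(str(path/"models/wolf.pb"))  # writes a frozen TensorFlow graph to disk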

It is also possible to run ONNX models in the browser using tfjs-onnx; I will do some tests with that.


Thanks for the post. I also struggled a lot to install onnx.
I just tested the tfjs-onnx demos, which work well. It could be the easiest available solution.
Otherwise, there is also this (similar to WebDNN):
https://tvm.ai/
but I never managed to compile a JavaScript library with it.


Hi, I just published my pet project, which runs a fastai-exported ONNX model in a React app using onnx.js. I posted about it on the forums and on my blog. Please let me know if you spot any mistakes or have questions.
