Anyone using Heroku to deploy a fastai2 model?

I finally managed to solve my problem. I created a repo from scratch on GitHub (badgiojuni/Bear-finder, the web app from Lesson 2/3 of the fastai book that differentiates a bear from a teddy bear, live at https://bear-app69.herokuapp.com/), used the requirements.txt from Aravinda Gayan's unpackAI Medium article "How to Deploy Fast.ai Models? (Voilà, Binder and Heroku)", and followed the steps from "Deploying your notebook as an app under 10 minutes" to deploy it on Binder. It worked well. I think I did something wrong when I first followed the git-related steps: the git track "*.pkl" command wasn't of any use for me; instead I just added the export.pkl file directly to the cloned repo, and that worked fine.
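For what it's worth, here is a hedged sketch of the tracking route I think those steps intended, assuming the "git track" command was really git lfs track (my reading, not something the guide confirms); committing export.pkl directly sidesteps all of this:

git lfs install                      # one-time Git LFS setup (assumes git-lfs is installed)
git lfs track "*.pkl"                # route .pkl files through LFS
git add .gitattributes export.pkl
git commit -m "add exported model"
git push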
It also worked on Heroku once I added a Procfile, as explained in the Medium article above; that was the only addition needed after getting it to work on Binder. Thanks for the helpful pointers on the forum!
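For reference, a minimal Procfile sketch along the lines of the Medium article (the notebook name bear_classifier.ipynb is my assumption; substitute your own):

web: voila --port=$PORT --no-browser --enable_nbextensions=True bear_classifier.ipynb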

How to Deploy Fast.ai Models? (Voilà, Binder and Heroku)

Medium article:
https://medium.com/unpackai/how-to-deploy-fast-ai-models-8704ea711ad2


I hope this may help.


There is a new problem with deploying the app: the new Pillow release, 8.3.0, breaks inference, so the Heroku apps won't display any predictions. See RuntimeError: Could not infer dtype of PILImage - #2 by crissman.
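A minimal sketch of where it surfaces, assuming the exported export.pkl from the notebook and a hypothetical test image:

from fastai.vision.all import load_learner, PILImage

learn = load_learner('export.pkl')
img = PILImage.create('grizzly.jpg')   # hypothetical test image
# With Pillow 8.3.0 installed, the next line raises:
#   RuntimeError: Could not infer dtype of PILImage
pred, idx, probs = learn.predict(img)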

I pinned an older version of Pillow in the requirements, and the app now works on Heroku and Binder. This is what my requirements.txt looks like now:

https://download.pytorch.org/whl/cpu/torch-1.9.0%2Bcpu-cp38-cp38-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.10.0%2Bcpu-cp38-cp38-linux_x86_64.whl
fastai==2.4
voila==0.2.10
ipywidgets==7.5.1
pillow==8.2
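One gotcha with those direct wheel links: they are cp38 builds, so the Python version on Heroku has to be 3.8 to match. Pinning it with a runtime.txt should keep the two in sync (the exact patch release below is my assumption):

# runtime.txt (note: the real file must contain only the version line)
python-3.8.10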


Here is what worked for me when deploying the bear classifier model on Heroku in 2022:

# requirements.txt
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
torchvision==0.12.0+cpu
fastai==2.6.3
voila
ipywidgets

Why this works while the suggestions above no longer do:

  • Mainly because fastai has been updated, and each fastai release pins the torch and torchvision versions it is compatible with (the -f flag simply points pip at the PyTorch wheel index so it can resolve the +cpu builds).

You can see this for yourself by deploying to Heroku with the requirements.txt below and watching which fastai, torch, and torchvision versions it downloads:

# requirements.txt without specifying wheels for torch
fastai
ipywidgets
voila

For example, my fastai version is 2.6.3 and it requires:

Collecting torch<1.12,>=1.7.0
         Downloading torch-1.11.0-cp310-cp310-manylinux1_x86_64.whl (750.6 MB)


Collecting torchvision>=0.8.2
         Downloading torchvision-0.12.0-cp310-cp310-manylinux1_x86_64.whl (21.0 MB)

From this log, we can adjust requirements.txt accordingly (following the pattern above) so that the matching versions are used:

torch==1.11.0+cpu
torchvision==0.12.0+cpu
fastai==2.6.3
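A quick way to sanity-check the pins locally before pushing (plain Python; the expected values in the comments match my pins):

import fastai, torch, torchvision

print(fastai.__version__)        # expect 2.6.3
print(torch.__version__)         # expect 1.11.0 (+cpu when the cpu wheel is installed)
print(torchvision.__version__)   # expect 0.12.0 (+cpu likewise)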

Hope this helps someone trying this deployment in the future.

For simple deployments, Gradio + Hugging Face Spaces seems like a good combo; Jeremy has covered these recently in some of his 2022 Kaggle notebooks.
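For anyone curious, a minimal sketch of that route for this bear classifier, assuming the exported export.pkl sits next to an app.py in the Space (the widget names follow the Gradio 3.x API, which may differ in other versions):

# app.py -- hedged sketch for a Hugging Face Space
import gradio as gr
from fastai.vision.all import load_learner

learn = load_learner('export.pkl')   # assumes the exported bear model is in the repo
labels = learn.dls.vocab

def classify(img):
    # learn.predict accepts the image Gradio hands us and returns (label, index, probs)
    pred, idx, probs = learn.predict(img)
    return {str(labels[i]): float(probs[i]) for i in range(len(labels))}

gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label()).launch()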
