Anyone using Heroku to deploy a fastai2 model?

I’m writing the deployment guide for Heroku/Voila for the new course, but I’ve still not been near fastai2.

Who’s deployed to Heroku with the new version?
Are these the torch wheels in your requirements.txt?

https://download.pytorch.org/whl/cpu/torch-1.5.1%2Bcpu-cp36-cp36m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.6.1%2Bcpu-cp36-cp36m-linux_x86_64.whl

If not, please reply and tell me which ones you’re using. Thanks.

Hi, I had errors in deployment and finally got it to work.
See my requirements.txt here https://github.com/mahtabsyed/PyTorch-fastaiv2-bears-classification
Heroku deployed link https://bears-pytorch.herokuapp.com/

And thanks to the author of https://github.com/mesw/whatgame3, who helped me fix this.

If the model.pkl pushes the slug over 500 MB, then according to this article https://course.fast.ai/deployment_heroku we should add the following code:

import urllib.request
from fastai2.vision.all import load_learner

MODEL_URL = "https://drive.google.com/uc?export=download&id=YOUR_FILE_ID"
urllib.request.urlretrieve(MODEL_URL, "model.pkl")  # download the exported model at run time

# note: load_learner(Path("."), "model.pkl") is the old fastai v1 signature;
# in fastai2, load_learner takes the file path directly
learner = load_learner("model.pkl")

However, can somebody explain where exactly the code above should go?
In the notebook uploaded to GitHub?

This is the code that I am trying to deploy on Heroku: https://github.com/enricodata/emotion-faces

In your inference notebook, replacing the first two lines of code. You currently load the model stored in your repo. If your model pushes your Heroku slug over 500 MB, store the model on Drive instead and use that code to download it to your Heroku dyno at run time, then load that copy.
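
For example, a minimal sketch of what the top of the inference notebook could look like (the Drive file ID is a placeholder, and the import path assumes the fastai2 pre-release package; adjust both for your setup):

import urllib.request
from pathlib import Path
from fastai2.vision.all import load_learner

MODEL_PATH = Path("model.pkl")
if not MODEL_PATH.exists():
    # fetch the exported model once, when the dyno starts
    urllib.request.urlretrieve(
        "https://drive.google.com/uc?export=download&id=YOUR_FILE_ID",
        MODEL_PATH)
learner = load_learner(MODEL_PATH)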

Thanks @joedockrill. However, even though the model.pkl is now stored in Google Drive, when deploying on Heroku I get this:
“Compiled slug size: 924.1M is too large (max is 500M).”

Do you know any work around for that?

You still have the pkl in your repo, so it’s still being copied into your slug. You need to remove it from there (git rm it, commit, and push again). Even after that, 924 MB sounds like a lot.

Why is packaging in your requirements.txt? You don’t seem to be using it.

I am now using that repo (https://github.com/enricodata/emotion-faces) without the pkl file.

This is the content of requirements.txt:

voila
fastai2>=0.0.16
pillow>=7.1.0
packaging
ipywidgets==7.5.1

It looks like the slug still grows to 924 MB when it deploys.

Like I said, I think you don’t need packaging in your requirements.txt. Have you tried removing it?

Are you pinning the CPU version of torch in your requirements.txt? You don’t need the full torch package, just the CPU-only build.
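
For example, pointing pip straight at the CPU-only wheels instead of letting fastai2 pull in the full GPU build (these are the Python 3.6 wheels quoted at the top of this thread; pick the ones matching your Python version):

https://download.pytorch.org/whl/cpu/torch-1.5.1%2Bcpu-cp36-cp36m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.6.1%2Bcpu-cp36-cp36m-linux_x86_64.whl
voila
fastai2>=0.0.16
pillow>=7.1.0
ipywidgets==7.5.1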

Doh! That’ll be it.

So what exactly do you suggest putting in the requirements.txt?

I tested it with:
voila
fastai2>=0.0.16
pillow>=7.1.0
ipywidgets==7.5.1

and the result is again:
“Compiled slug size: 924.1M is too large (max is 500M).”

I also tested the following.

I changed this in requirements.txt:
voila
fastai2>=0.0.16
pillow>=7.1.0
ipywidgets==7.5.1

into this:
https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp38-cp38-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp38-cp38-linux_x86_64.whl
fastai==2.0.11
voila
ipywidgets

and the error is the same as above: “Compiled slug size: 924.1M is too large (max is 500M).”

I don’t want to start a new topic about this, so I’m hoping someone notices this. I have gone through all the troubleshooting I could find on this forum. My slug size is fine and my app is deployed, but it only gets as far as uploading the 128×128 image and never returns the prediction/probability (even though it works in the Jupyter notebook on Paperspace).

I had a look at the build and there’s an error about the pytorch wheels being incompatible, so I suspect that’s the culprit (if not, then the noose beckons at this point :ghost:).

Would anyone be kind enough to tell me which are the latest ones, i.e. the ones compatible with fastai 2.2.3? I found this list somewhere, but I couldn’t make heads or tails of it.


So I solved the problem. To anyone reading this in the future: have a look at the repos in this thread to see what you need, but remember that your requirements.txt needs to contain pytorch wheels compatible with:

  1. the version of fastai that you’re using.
  2. the version of python that you’re using.

The list can be found here. If everything works in the Jupyter notebook but not when you deploy your model, check the version of fastai you’re using by typing the following commands in your notebook:

import fastai
print(fastai.__version__)
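
Since the wheels have to match both points above, it’s also worth printing the Python and torch versions from the same notebook:

import sys
import torch
print(sys.version)        # the Python version maps to the cpXX tag in the wheel filename
print(torch.__version__)  # the torch build your notebook is currently running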

So you want to include two wheels in your requirements.txt, both starting with:
https://download.pytorch.org/whl

Then from the list you want to choose the ones starting with cpu. First you need a wheel for torch. Choose a Linux one; there will be multiple Linux wheels too. To pick a working one, it should be compatible with the version of Python you’re using, which is reflected in the ‘cp’ part of the link: cp38, for example, refers to Python 3.8 (take this into account if you’re including a runtime.txt in your repo).
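
To make that concrete, here’s how one of the wheel filenames quoted earlier in this thread breaks down (the annotations are mine):

torch-1.6.0%2Bcpu-cp38-cp38-linux_x86_64.whl
# torch-1.6.0+cpu -> package and version; %2B is a URL-encoded ‘+’, and ‘+cpu’ marks the CPU-only build
# cp38-cp38       -> built for CPython 3.8 (match this to your Python version / runtime.txt)
# linux_x86_64    -> 64-bit Linux wheel, which is what Heroku runs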

Once you make your choice, append it to the base URL above. So, to refer to one of the posts above mine, @enr would have chosen “cpu/torch-1.6.0%2Bcpu-cp38-cp38-linux_x86_64.whl”.

You do the same thing for the second hyperlink, this time for torchvision rather than torch. I don’t know which version of pytorch is compatible with which version of fastai, but if your app fails due to this issue, then the build logs should indicate that the wheel is ‘not supported on this platform’. Errors in the application logs should also point you in the right direction (click on ‘More’ in the top right-hand corner and choose ‘Logs’ from the dropdown).
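
Putting it all together, a requirements.txt along these lines should work (these are the Python 3.8 / torch 1.6.0 wheels quoted earlier in the thread; substitute whichever versions match your own fastai and Python setup):

https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp38-cp38-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp38-cp38-linux_x86_64.whl
fastai==2.0.11
voila
ipywidgets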

I don’t know if I’ve got everything right; I’ve pieced it all together thanks to everyone on this forum, and particularly to @joedockrill’s and @ringoo’s efforts.
