Heroku and fastai2

Hello,

I’ve spent some time trying to host a simple API POST request that reads a model and returns a prediction, basically the same as the bear app from v3 of the course. The challenge I’m encountering is that the requirements come to about 900 MB all together, and Heroku’s limit is 500 MB. My requirements.txt file is this:

flask==1.1.2
fastai2==0.0.17
flask_cors==3.0.8
ipykernel==5.3.0

For fastai v1, the suggestion was to run setup.py so I could install only the dependencies required for the functionality I wanted, in this case vision. The setup.py file for v2 doesn’t seem to have this functionality yet (or at least I couldn’t figure it out). I’m looking for tips to either:

  1. Get the dependencies under 500 MB, or
  2. Get suggestions for another way to do this.

Maybe the only solution is to get a paid account with a few GB of storage; I’m just wondering if there’s an easier way to do this.

BONUS: My API is this:

from flask import Flask
from flask import request
from flask_cors import CORS
from fastai2.vision.all import *
import json

app = Flask(__name__)
CORS(app)

path = Path()
learn_inf = load_learner(path/'export_elephants.pkl', cpu=True)

@app.route("/", methods=["POST"])
def predict():
    # Read the uploaded image from the multipart form data
    imagefile = request.files.get('imagefile', '')
    img = PILImage.create(imagefile)
    pred, pred_idx, probs = learn_inf.predict(img)
    # Index the probabilities tensor directly instead of parsing its string repr
    prob = probs[pred_idx].item()
    return json.dumps({"type": str(pred), "probability": str(prob)})

if __name__ == "__main__":
    app.run(debug=True)
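For the probability, indexing the predictions tensor with `pred_idx` is more robust than slicing the output of `str(probs.max())`. A stdlib-only sketch of the idea, with a plain list standing in for the fastai tensor (with a real tensor you would also call `.item()` to get a Python float):

```python
# probs as returned by learn_inf.predict holds one probability per class;
# pred_idx is the index of the predicted class, so probs[pred_idx] is the
# winning score. A list stands in for the tensor here (for illustration).
probs = [0.0031, 0.9969]   # per-class probabilities, e.g. [other, elephant]
pred_idx = 1               # index of the predicted class

prob = probs[pred_idx]     # no string parsing needed
print(prob)                # → 0.9969
```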

Were you able to fix this?
In my case I took a look at the PyTorch stable downloads list
and found these CPU-only wheels to work:

https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp36-cp36m-linux_x86_64.whl

https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp36-cp36m-linux_x86_64.whl

Around 150 MB for torch and 5 MB for torchvision
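Pinning those CPU-only wheels directly in requirements.txt keeps pip from pulling the full CUDA build of torch. A sketch, assuming a Python 3.6 runtime on Heroku (matching the cp36 wheels above) and the versions from the original post; dropping ipykernel from the deploy requirements also saves space, since it’s only needed for notebooks:

```text
flask==1.1.2
flask_cors==3.0.8
https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp36-cp36m-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp36-cp36m-linux_x86_64.whl
fastai2==0.0.17
```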

How to Deploy Fast.ai Models? (Voilà, Binder and Heroku)

Medium article:

code:

I hope this helps.