Running out of memory when deploying an extremely simple Flask app in Heroku

Hi everyone!

Thanks for this amazing framework, fastai.

I want to deploy a simple model (resnet34).

My whole flask app is a single file:

from flask import Flask
from fastai.vision.all import *
import requests  # used below, but missing from the original imports

app = Flask(__name__)

learn = load_learner("./export.pkl")

@app.route("/<path:image_url>")
def hello_world(image_url):
    print(image_url)
    response = requests.get(image_url)
    img = PILImage.create(response.content)
    predictions = learn.predict(img)
    print(predictions)
    return predictions[0]
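One thing worth trying on a small dyno is loading the learner lazily, on the first request, instead of at import time: each gunicorn worker pays the model's memory cost the moment the module is imported, even before it serves anything. A minimal sketch of that pattern — `load_model` here is just a stand-in for fastai's `load_learner("./export.pkl")`, which is the only fastai-specific part:

```python
import functools

def load_model():
    # Stand-in for fastai's load_learner("./export.pkl");
    # in the real app the fastai call would go here.
    return object()

# Cache of size 1: the model is loaded at most once per worker process,
# and only when the first request actually needs it.
@functools.lru_cache(maxsize=1)
def get_learner():
    return load_model()

first = get_learner()
second = get_learner()
print(first is second)  # both calls return the same loaded model
```

Inside the route you would then call `get_learner().predict(img)` instead of using a module-level `learn`.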

It works fine for a couple of requests, but then Heroku starts logging things like:

This is my requirements.txt:

-f https://download.pytorch.org/whl/torch_stable.html

torch==1.8.1+cpu
torchvision==0.9.1+cpu
fastai>=2.3.1
Flask==2.0.1
gunicorn==20.1.0

Pillow

requests==2.26.0
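Since gunicorn is in the requirements, it may also be worth checking how many workers it starts: every worker loads its own copy of the model, so two workers roughly doubles the memory footprint. On a 512 MB dyno a single worker is usually all you can afford. A hypothetical Procfile, assuming the file above is saved as app.py:

```
web: gunicorn app:app --workers 1
```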


I’m pretty new to all of this MLOps stuff, so any help is appreciated :slight_smile:

I’m pretty sure that loading resnet34 plus PyTorch itself needs more than the half gig of memory a free Heroku dyno gives you.
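To confirm it really is memory, you can log the process's peak resident memory around each prediction and compare it against the dyno's 512 MB. A sketch using only the standard library (note `ru_maxrss` is reported in kilobytes on Linux but bytes on macOS):

```python
import resource

def peak_rss_mb():
    # Peak resident set size of this process:
    # kilobytes on Linux, bytes on macOS.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / 1024  # MB on Linux (use rss / 1024**2 on macOS)

print(f"peak RSS so far: {peak_rss_mb():.1f} MB")
```

Printing this before and after `learn.predict(...)` in the route would show whether each request keeps pushing the number toward the limit.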