Hosting a simple web application

(Chan Sooi Loong) #1

Hi all,

I have created a simple web application built with just Flask and fastai, based on the Kaggle Dog Breed Identification competition, that lets a user upload a photo and then predicts the dog's breed. It runs successfully on my local machine, but I am having difficulty hosting it on Heroku.

I am not sure why the torch package cannot be found. I will try downgrading to version 0.1.2 or 0.1.2.post1 and try again.

    Collecting torch==0.3.0.post4 (from -r /tmp/build_e09cdcbfaa2f8952363f0a52ea14bf40/requirements.txt (line 78))
    Could not find a version that satisfies the requirement torch==0.3.0.post4 (from -r /tmp/build_e09cdcbfaa2f8952363f0a52ea14bf40/requirements.txt (line 78)) (from versions: 0.1.2, 0.1.2.post1)
    No matching distribution found for torch==0.3.0.post4 (from -r /tmp/build_e09cdcbfaa2f8952363f0a52ea14bf40/requirements.txt (line 78))
  1. Does anyone have experience deploying this to Heroku or another cloud provider (AWS, Google Cloud, PythonAnywhere, DigitalOcean)?

  2. For this application, I use the following approach: load the resnext architecture, load the pretrained weights, and run a forward pass for prediction.

     arch = resnext101_64
     # load pretrained model
     learn = ConvLearner.pretrained(arch, data, precompute=False)
     # load trained weights ('dog_breeds' is a hypothetical saved-weights name)
     learn.load('dog_breeds')

    This requires the application to load both the pretrained weights (330 MB) and the trained weights (330 MB), which makes the application huge and deployment painfully slow.
    Is there a way to build the model without using ConvLearner.pretrained? (I am hoping to skip the pretrained weights.) I have tried ConvLearner.from_model_data, but it does not work in this case.

Thanks a lot.

(Kevin Wong) #2

Really cool application, I am waiting to hear updates.

(Kevin Wong) #3

I was looking at some Flask implementations of “Not Hotdog”; maybe that can help you.

(Ryder) #4

+1 for an answer to this question. It's fun and motivating to be able to make an accessible demo of our learner.


I'm trying to do this exact same thing. Any luck?

(Igor Kasianenko) #6

It might be connected to your local host environment. If you run something on Heroku, it has different parameters; first of all, it might not have a GPU, so your model should run on CPU. If there is an option to save the model for production, that should be the way to go.

If you don't find a way to save the model for inference (== production), a hacky way would be installing everything on Heroku.

I'm also interested in whether running the model without a GPU is an option, so let's crack this topic %)

By the way, regarding the link on GitHub: his demo didn't work for me.

(Chan Sooi Loong) #7

Hey guys, I have managed to host it on Google Compute Engine, albeit with a somewhat hacky approach. I will write a blog post about my experience when I have time.

As for the Heroku error above, it is due to the torch package not being available on PyPI. To fix it, replace the `torch` line in requirements.txt with a direct link to the wheel.
But Heroku won't be able to run the web app after I deploy it, as the RAM is just 512 MB on the free tier. The inference model's forward pass on CPU takes much more memory than that.
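For reference, the usual fix is to point pip directly at a prebuilt CPU wheel instead of the PyPI package name. The exact URL below is an example for Python 3.6 on 64-bit Linux from that era and may have changed since; check download.pytorch.org for the wheel matching your runtime:

```
# requirements.txt: replace the `torch==0.3.0.post4` line with a direct wheel URL
http://download.pytorch.org/whl/cpu/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl
```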

(amine) #8

It would be very helpful if anyone who has succeeded in installing and running a fastai environment in production could share their experience, since I am trying to build a small web API to do predictions using the fastai library.
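For anyone sketching such an API: the web layer itself is small, and the fastai-specific part can be hidden behind a single function. Here is a minimal Flask sketch, where `predict_image` is a hypothetical stub standing in for the real model call:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_image(data: bytes) -> str:
    # Stand-in for the real pipeline: decode bytes -> tensor,
    # run the model, map the class index to a breed name.
    return "golden_retriever"  # hypothetical label

@app.route("/predict", methods=["POST"])
def predict():
    f = request.files.get("photo")
    if f is None:
        return jsonify(error="no file uploaded"), 400
    return jsonify(breed=predict_image(f.read()))
```

Loading the model once at module import (rather than per request) keeps inference latency down, since the heavy weights are read from disk only at startup.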


Can you link to your Google Cloud implementation in the meantime? Would appreciate the blog post also!! :slight_smile:

(Jesús Pérez) #10

Hello @jakcycsl, can you provide more info on how you built your app? :slight_smile:

(Daniel Abban) #11

Hi jakcycsi,

Do you mind sharing the code you used for building the web app?

(Dave Luo) #12

Hi everyone, just came across this thread. ICYMI, I posted a step-by-step overview (and github repo) of creating and deploying a web app demo of a fastai-trained model with flask, gunicorn, nginx, and remote hosting: