Hosting a simple web application


(Chan Sooi Loong) #1

Hi all,

I have created a simple web application built with just Flask and fastai, based on the Kaggle Dog Breed Identification competition, that allows users to upload a photo; the model then predicts the dog breed classes. I have successfully run it on my local machine, but I am having difficulty hosting it on Heroku.

Not sure why the torch package cannot be found. I will try downgrading to version 0.1.2 or 0.1.2.post1 and try again.

    Collecting torch==0.3.0.post4 (from -r /tmp/build_e09cdcbfaa2f8952363f0a52ea14bf40/requirements.txt (line 78))
    Could not find a version that satisfies the requirement torch==0.3.0.post4 (from -r /tmp/build_e09cdcbfaa2f8952363f0a52ea14bf40/requirements.txt (line 78)) (from versions: 0.1.2, 0.1.2.post1)
    No matching distribution found for torch==0.3.0.post4 (from -r /tmp/build_e09cdcbfaa2f8952363f0a52ea14bf40/requirements.txt (line 78))
  1. Does anyone have experience deploying it to Heroku or other cloud providers (AWS, Google Cloud, PythonAnywhere, DigitalOcean)?

  2. For this application, I am using the following approach: load the resnext model, load the pretrained weights, and perform a forward pass for prediction.

     from fastai.conv_learner import *  # fastai v0.7-style imports

     arch = resnext101_64
     # build the learner from the pretrained architecture
     # (data is the ImageClassifierData used at training time)
     learn = ConvLearner.pretrained(arch, data, precompute=False)
     # load the trained weights saved earlier with learn.save(...)
     learn.load('224_pre_resnext101_64')
    

     This requires the application to load the pretrained weights (330 MB) and the trained weights (330 MB), which makes the application too large and deployment painfully slow.
     Is there a way to import the model without using ConvLearner.pretrained? (I am hoping to omit the pretrained weights; one possible direction is sketched below.) I have tried ConvLearner.from_model_data, but it does not work in this case.
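One possible direction, as a rough untested sketch rather than a confirmed fastai API: rebuild the module yourself and load only the trained weights with plain PyTorch, skipping the ImageNet download entirely. Here build_model is a hypothetical helper that must recreate the exact architecture behind learn.model.

    import torch

    # hypothetical helper: must recreate the exact nn.Module behind learn.model
    # (backbone plus the custom head fastai adds), otherwise the keys won't match
    model = build_model()
    # load only the trained weights saved by learn.save(...)
    state = torch.load('models/224_pre_resnext101_64.h5')
    model.load_state_dict(state)
    model.eval()  # inference mode: freeze dropout/batchnorm behaviour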

Thanks a lot.


(Kevin Wong) #2

Really cool application, I’m waiting to hear updates.


(Kevin Wong) #3

I was looking at some Flask implementations of “Not Hotdog”; maybe that can help you.


(Ryder) #4

+1 for an answer to this question. It’s fun and motivating to be able to make an accessible demo of our learner.


#5

I’m trying to do this exact same thing, any luck?


(Igor Kasianenko) #6

It might be connected with your local host environment. If you run something on Heroku, it has different parameters; first of all, it might not have a GPU, so your model should run on CPU. If there is an option to save a fast.ai model for production, that would be the way to go.
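On the CPU point, this part at least is standard PyTorch (the file name here is just the one from the original post): weights saved on a GPU machine can be remapped to CPU at load time.

    import torch

    # remap tensors that were saved on a GPU onto the CPU at load time
    state = torch.load('models/224_pre_resnext101_64.h5',
                       map_location=lambda storage, loc: storage)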

If you don’t find a way to save the model for inference (i.e. production), a hacky way would be installing fast.ai on Heroku.

I’m also interested: is it an option to run the model without a GPU? So let’s crack this topic %)

By the way, the GitHub link to his demo didn’t work for me.


(Chan Sooi Loong) #7

Hey guys, I have managed to host it on Google Compute Engine, albeit with a somewhat hacky approach. I will write a blog post about my experience when I have time.

As for the Heroku error above, it is due to the torch package not being available on pip. To fix it, replace the torch line in requirements.txt with this CPU wheel URL: http://download.pytorch.org/whl/cpu/torch-0.3.1-cp36-cp36m-linux_x86_64.whl
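In other words, the relevant part of requirements.txt ends up looking roughly like this (the flask line is just illustrative context; pip accepts both comments and direct wheel URLs in requirements files):

    # replace the pinned torch==0.3.0.post4 line with the direct CPU wheel URL
    flask
    http://download.pytorch.org/whl/cpu/torch-0.3.1-cp36-cp36m-linux_x86_64.whl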
But Heroku won’t be able to run the web app after I deploy it, as the RAM on the free tier is just 512 MB. The model’s forward pass on CPU at inference time takes much more memory than that.


(amine) #8

It would be very helpful if anyone who has succeeded in installing and running a fastai environment in production could share their experience, since I am trying to build a small web API to do predictions using the fastai library.
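As a starting point, a minimal Flask prediction endpoint could look like the sketch below (untested; predict_image is a hypothetical helper wrapping the fastai/PyTorch inference call, and its output must be JSON-serializable):

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route('/predict', methods=['POST'])
    def predict():
        f = request.files['photo']        # image uploaded via the form
        path = '/tmp/upload.jpg'
        f.save(path)
        preds = predict_image(path)       # hypothetical model wrapper
        return jsonify(predictions=preds)

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)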


#9

Can you link to your Google Cloud implementation in the meantime? Would appreciate the blog post also!! :slight_smile:


(Jesús Pérez) #10

Hello @jakcycsl, can you provide more info on how you built your app? :slight_smile:


(Daniel Abban) #11

Hi @jakcycsl,

Do you mind sharing the code you used for building the web app?


(Dave Luo) #12

Hi everyone, just came across this thread. ICYMI, I posted a step-by-step overview (and GitHub repo) of creating and deploying a web app demo of a fastai-trained model with Flask, Gunicorn, Nginx, and remote hosting:


(Xovo Larjem) #13

Oh lol! So I’m not the only one stuck on this, haha. Did you get any solution, or did luck favor you?



(Cedric Chee) #14

In my own experience, tl;dr: most of my recent production deployment successes are thanks to Docker containers, which simplify the deployment process and reduce friction.

For example, on Heroku you can customize your stack and deploy with a container by building your own image to the runtime environment specs you need (PyTorch 0.3.x, Python 3.6, etc.), provided your model is optimized to run on CPU. It’s hard but possible to make this work on the free hobby plan. That covers the model service (Flask). As for the front-end web interface and the web back-end, I use Django’s model/ORM and the REST Framework for an API that saves front-end data to a PostgreSQL DB. Both of these fit easily and run well on Heroku for a free hobby project.
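As a rough illustration of that setup (not an exact file from this thread; an app.py exposing a Flask `app` object and gunicorn listed in requirements.txt are assumed), a CPU-only image along these lines works with Heroku’s container deploys:

    # illustrative Dockerfile: Python 3.6 + CPU-only PyTorch 0.3.1
    FROM python:3.6-slim

    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir \
          http://download.pytorch.org/whl/cpu/torch-0.3.1-cp36-cp36m-linux_x86_64.whl \
        && pip install --no-cache-dir -r requirements.txt

    COPY . .
    # Heroku injects $PORT at runtime; bind the Flask app to it
    CMD gunicorn app:app --bind 0.0.0.0:$PORT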

Yeah, definitely check out Dave’s post. I got a lot done with his idea, and it took me far enough until our traffic volume outgrew this architecture. For non-serious / hobby projects on a free or low-cost plan, I use Heroku :smiley:


(Shankar) #15

A lot of people are struggling to deploy their apps on Heroku, so I have written a detailed guide on GitHub in case anyone needs it.

I am also writing a blog post, which will be coming soon :slight_smile: