Hosting a simple web application

I’m in the same spot and came here looking for input. Everything works like a charm on localhost, but I didn’t see Heroku’s slug size limit coming.

My only hints so far: as you mentioned, PyTorch has a CPU-only version, and you can find a .whl for it on their website at about 68MB. However, the instructions for installing that wheel are followed by a “pip3 install torchvision” command, and that starts collecting the normal CUDA torch, which is about 580MB on its own and the main culprit here. Installing fastai kicks off the same collection of torch and friends. There are instructions in the docs, under installation, for controlling which dependencies get installed, so the pieces of the puzzle seem to be here, but I’m overwhelmed at this point trying to navigate it all through Heroku. I’ll update if I figure it out.
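For reference, this is roughly the workaround I was circling before finding the answer in the docs (see the edit below): install the CPU wheel first, then add torchvision with pip’s `--no-deps` flag so it doesn’t re-resolve torch. A sketch only, guarded by an environment variable so nothing is downloaded unless you opt in:

```shell
# Workaround sketch, NOT the route the docs recommend.
# Set RUN_INSTALL=1 to actually perform the (large) downloads.
if [ "${RUN_INSTALL:-0}" = "1" ]; then
    # CPU-only wheel (~68MB) from pytorch.org:
    pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp36-cp36m-linux_x86_64.whl
    # A plain "pip3 install torchvision" would re-collect the ~580MB CUDA
    # torch; --no-deps skips dependency resolution entirely, so you would
    # then have to install torchvision's other deps (e.g. pillow) yourself.
    pip3 install --no-deps torchvision
fi
```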

EDIT: Getting it to work on Heroku:

Surprise surprise, the docs had the simple answer. Installing torchvision separately etc. wasn’t necessary. One caveat: the wheel linked there is out of date; it might still work, but I didn’t test it.

What I did to get it working for myself was add the correct wheel for PyTorch (from https://pytorch.org/get-started/locally/ with Stable, Linux, Pip, Python 3.6, CUDA: None) and fastai to requirements.txt.

Make sure you don’t have separate lines for torch or torchvision. I also removed my old fastai line, which used to read fastai==1.0.51. I left the other dependencies in place for starters and it ended up working out.

Add these lines to requirements.txt:

```
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp36-cp36m-linux_x86_64.whl
fastai
```
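To make the wiring concrete, here’s a sketch of writing those two lines into a project (the /tmp path is just for illustration; in a real app, requirements.txt lives at the root of your Heroku repo):

```shell
# Create a throwaway project directory and write the requirements.txt
# exactly as shown above.
mkdir -p /tmp/heroku-fastai-demo
cd /tmp/heroku-fastai-demo
cat > requirements.txt <<'EOF'
https://download.pytorch.org/whl/cpu/torch-1.0.1.post2-cp36-cp36m-linux_x86_64.whl
fastai
EOF
# Heroku's Python buildpack effectively runs "pip install -r requirements.txt",
# so pip fetches the CPU-only torch from the direct URL, and fastai's torch
# dependency is then satisfied by it instead of pulling the CUDA build.
cat requirements.txt
```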

With my other libraries, that left my slug at roughly 340MB. That’s apparently still not great, and there are supposedly ways to trim it further to improve performance, but this will get you up and running without hitting the 500MB cap.
