I’ve recently done this after watching Jeremy’s Part1 v2 class.
One difference between my approach and what I found online is that I used PyTorch instead of Tensorflow/Keras, and I didn't want to convert the model to Tensorflow. It's a resnet101 model with an
AdaptiveConcatPool2d layer as the penultimate layer (i.e. what the Fast.ai
ConvLearner would do if you set …)
As a result, I couldn’t deploy to Google Cloud ML, so I created a Docker image and deployed to Digital Ocean instead.
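For anyone unfamiliar with that layer: AdaptiveConcatPool2d just concatenates an adaptive max pool and an adaptive average pool, doubling the channel count. A minimal re-implementation in plain PyTorch (the 2048-channel input below is an assumption matching resnet101's final feature depth, and the max-then-avg ordering follows my understanding of the fastai version):

```python
import torch
import torch.nn as nn

class AdaptiveConcatPool2d(nn.Module):
    """Concatenate AdaptiveMaxPool2d and AdaptiveAvgPool2d outputs along channels."""
    def __init__(self, size=1):
        super().__init__()
        self.mp = nn.AdaptiveMaxPool2d(size)
        self.ap = nn.AdaptiveAvgPool2d(size)

    def forward(self, x):
        # Output has 2x the input channels: max-pooled then avg-pooled features
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

# resnet101's last conv block emits 2048 channels; the pool doubles that to 4096
features = torch.randn(2, 2048, 7, 7)
pooled = AdaptiveConcatPool2d()(features)
print(pooled.shape)  # torch.Size([2, 4096, 1, 1])
```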
The main challenge was getting the right setup for the Docker image, which was way harder than I expected. I've pasted my Dockerfile and
requirements.txt below in the hope that it saves someone else a lot of time. If anyone has suggestions on how I can improve the config, please let me know! I'm definitely not a devops guy, so this was all pretty challenging for me.
Also, for my resnet101, I had to increase the amount of RAM dedicated to Docker to 4GB or else it would run out of memory.
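(On Docker for Mac/Windows the RAM allocation lives in the Docker preferences; if you're running the container directly on a Linux host you can grant it memory on the command line instead. A sketch of the build/run commands I mean; the image tag `pytorch-app` is just a made-up name, and port 4000 matches the Dockerfile below:)

```shell
# Build the image from the Dockerfile in the current directory
docker build -t pytorch-app .

# Run with a 4 GB memory limit, mapping container port 4000 to the host
docker run -m 4g -p 4000:4000 pytorch-app
```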
```dockerfile
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    ca-certificates \
    cmake \
    curl \
    gcc \
    git \
    libatlas-base-dev \
    libboost-all-dev \
    libgflags-dev \
    libgoogle-glog-dev \
    libhdf5-serial-dev \
    libleveldb-dev \
    liblmdb-dev \
    libopencv-dev \
    libprotobuf-dev \
    libsnappy-dev \
    protobuf-compiler \
    python-dev \
    python-numpy \
    python3-pip \
    python-scipy \
    python3-setuptools \
    vim \
    unzip \
    wget \
    zip \
    && \
    rm -rf /var/lib/apt/lists/*

# Source code
WORKDIR /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip3 install --upgrade pip
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 or 4000 available to the world outside this container
# EXPOSE 80
EXPOSE 4000

# Run app.py when the container launches
CMD ["python3", "app.py"]
```
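My actual app.py isn't shown here, but for context, a minimal Flask sketch of the shape the Dockerfile's CMD expects. The `/predict` route and the `predict_image` helper are hypothetical names; real code would load the trained resnet101 with torch.load() at startup and run the uploaded image through it:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-in for the real model. An actual app would load the
# trained PyTorch model once at startup and run inference in this function.
def predict_image(image_bytes):
    return {"class": "placeholder", "confidence": 0.0}

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the uploaded image under the multipart form field "file"
    image_bytes = request.files["file"].read()
    return jsonify(predict_image(image_bytes))

# In the container this would serve on the EXPOSEd port:
# app.run(host="0.0.0.0", port=4000)
```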
```
Flask
numpy
pillow
pandas
http://download.pytorch.org/whl/cu80/torch-0.3.1-cp35-cp35m-linux_x86_64.whl
torchvision
torchtext
```