I’m trying to deploy a web service with a fast.ai model. I followed the deploying-on-render thread - Deployment Platform: Render ✅ - and it works perfectly.
However, the Docker image is so big (4.43 GB; I added some additional libraries such as torchaudio and librosa) that pushing an updated image is slow and uses a lot of network resources.
I’m wondering how fastai folks deploy their web services. Do you use Docker, and do you have any tips to reduce the image size? What is the minimum image size we can achieve?
Does Docker have enough advantages that we should use it in the first place? Or could we just use pip or conda to install everything we need in a virtual environment on the cloud, and then, each time we have a release of our project, simply install the project there? What is the disadvantage of this method?
You can find my project here: GitHub - dienhoa/fastai2-Starlette: A Starlette example for deployment in fastai2. It includes my requirement.txt and Dockerfile.
If you run the image on CPU only, you can save quite a bit of space by installing the PyTorch CPU-only packages:
pip install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
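To show how that install command fits into an image build, here is a minimal Dockerfile sketch. The slim base image, the `requirements.txt` and `app.py` filenames, and the start command are assumptions, not taken from the repo; only the CPU-only pip line comes from the post above. `--no-cache-dir` keeps pip's download cache out of the layer.

```dockerfile
FROM python:3.8-slim

WORKDIR /app

# CPU-only wheels are a fraction of the size of the CUDA builds.
RUN pip install --no-cache-dir \
        torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 \
        -f https://download.pytorch.org/whl/torch_stable.html

# Install the remaining (CPU-safe) dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Installing torch before copying the rest of the source also lets Docker cache that heavy layer across rebuilds, so only the application layers are re-uploaded on each release.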
You could also try installing fastai without its dependencies and add only the packages you really need.
Other options to reduce the size:
- switch to plain PyTorch
- convert to ONNX
Thanks a lot @florianl. I will try your suggestions and report back what I get.
After installing only the PyTorch CPU-only packages, I reduced the Docker image size from 4.43 GB to 1.99 GB!!
You can find my requirement.txt below.
I tried installing fastai without dependencies, but it’s not easy. I need to import load_learner from fastai, and to do that I have to install several unrelated packages (matplotlib, pandas, …). The full list of these dependencies didn’t emerge after one round of trial and error, so I gave up after some attempts.
I also tried going without fastai entirely. However, learn.model(x) raised an error about shapes. I will investigate it further another time.
Use learn.predict() to make your predictions, so the transforms etc. will be applied.
Ah, thanks. I mean, because I want to remove fast.ai from the dependencies, I don’t have learn.predict anymore. I wanted to use the raw PyTorch model, but it didn’t work. Now, thanks to you, I remember that fast.ai applies its transformations to the image.
So to safely remove fast.ai, I need to copy the transformations into my source code. With that approach I can reduce my Docker image size a lot, but at the same time I have a lot of preprocessing work to do.
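Copying the transformations out of fastai might look roughly like the sketch below. It assumes the Learner used the common resize-then-normalize pipeline with the standard ImageNet statistics, which is what fastai applies by default when fine-tuning a pretrained model; the function name, the 224-pixel target size, and the stats are assumptions to check against your own training setup.

```python
import numpy as np
import torch
from PIL import Image

# ImageNet normalization stats, fastai's default for pretrained models.
# Replace these if your Learner was normalized with different values.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(img: Image.Image, size: int = 224) -> torch.Tensor:
    """Replicate a typical fastai inference pipeline: resize,
    scale to [0, 1], normalize, and add a batch dimension."""
    img = img.convert("RGB").resize((size, size))
    x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
    x = x.permute(2, 0, 1)                   # HWC -> CHW
    return ((x - MEAN) / STD).unsqueeze(0)   # (1, 3, size, size)
```

With this in place, inference becomes `model(preprocess(img))` on the raw PyTorch model, with no fastai import in the serving image.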
I remember @muellerzr (does it bother you that I @-mention you on the forum?) working on something called fastai-minima (fastai-minima · PyPI). Is it intended for my case (doing inference with raw PyTorch without a lot of manual work)?
Its purpose is to house fastai’s Learner class and Callbacks. Whatever you may need beyond that is up to you.
Generally, if you’re worried about image sizes, it’s recommended not to use fastai at all and to use raw torch entirely (coupled with whatever preprocessing libraries you need).
My new course in a few months will be covering this too.
I also found the answer to a very important question for me. Thanks to everyone for the suggestions!