I’m trying to deploy a webservice with a fast.ai model. I followed this thread on deploying on Render - Deployment Platform: Render ✅ - and it works perfectly.
However, the Docker image is so big (4.43 GB; I added some additional libraries such as torchaudio and librosa) that pushing an updated image is slow and uses a lot of network resources.
I’m wondering how fastai folks deploy their webservices. Do you use Docker, and if so, do you have any tips to reduce the image size? What is the minimum image size we can achieve?
Does Docker have enough advantages that we should use it in the first place? Or could we just use pip or conda to install everything we need in a virtual environment on the cloud server, and then, each time we release a new version of our project, simply install it there? What are the disadvantages of this method?
I tried to install fastai without its dependencies, but it’s not easy. I need to import load_learner from fastai, and to do that I need to install several unrelated packages (matplotlib, pandas, …). The list of these dependencies doesn’t come out in a single try/error pass, so I gave up after a few attempts.
I also tried without fastai. However, learn.model(x) raised an error about shape. I will continue to investigate it another time.
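For reference, this is roughly what I was trying. My guess is the shape error comes from a missing batch dimension (the 224 size here is just a placeholder, not necessarily what I trained with):

```python
import torch

# learn.model is the plain nn.Module behind the fastai Learner.
# It expects a batch, i.e. [N, C, H, W], while a single decoded
# image tensor is only [C, H, W].
model = learn.model.eval()

x = torch.randn(3, 224, 224)        # one image, [C, H, W]
with torch.no_grad():
    out = model(x.unsqueeze(0))     # unsqueeze(0) -> [1, 3, 224, 224]
```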
Ah, thanks. What I mean is: because I want to remove fast.ai from the dependencies, I no longer have learn.predict. I wanted to use the raw PyTorch model, but it didn’t work. Now, thanks to you, I remember that fast.ai applies its own transformations to the image.
So to safely remove fast.ai, I need to copy those transformations into my own source code. With that approach, I can reduce my Docker image size a lot, but at the same time I have a lot of preprocessing to reimplement.
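Something like this is what I have in mind - a minimal sketch with plain PIL/torchvision, assuming a 224-pixel resize and ImageNet normalization, which may not match what my Learner actually used during training:

```python
import torch
from PIL import Image
from torchvision import transforms

# Re-implementation of the kind of preprocessing fastai applies at inference
# time for a typical transfer-learning image model. The resize and the
# normalization stats must match what was used during training.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                                    # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),          # imagenet_stats
])

img = Image.open("some_image.jpg").convert("RGB")
x = preprocess(img).unsqueeze(0)      # [1, 3, 224, 224]

# `model` is the raw nn.Module loaded without fastai
with torch.no_grad():
    logits = model(x)
    pred_idx = int(logits.argmax(dim=1))   # map the index back to your class vocab yourself
```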
I remember @muellerzr (does it bother you that I @-mention you in the forum?) working on something called fastai-minima (fastai-minima · PyPI). Is it meant to apply to my case (doing inference with raw PyTorch without a lot of manual work)?
Its purpose is to house fastai’s Learner class and Callbacks. Whatever you may need with that is up to you.
Generally, if you’re worried about image size, it’s recommended not to use fastai at all and just use raw torch in totality (coupled with whatever preprocessing libraries you need).
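One possible workflow for that is to export the trained model to TorchScript once (on a machine that still has fastai installed), so the serving container only needs torch - no fastai and no Python class definitions for the model. The file name and input size below are just placeholders:

```python
import torch

# On the training machine (fastai installed):
learn.model.cpu().eval()
example = torch.randn(1, 3, 224, 224)              # dummy input matching your training size
scripted = torch.jit.trace(learn.model, example)
scripted.save("model_scripted.pt")

# In the slim serving image (only torch installed):
model = torch.jit.load("model_scripted.pt").eval()
with torch.no_grad():
    out = model(example)
```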
My new course in a few months will be covering this too.