I’m working with a friend, trying to follow the fast.ai tutorial for AWS deployment, but implementing object detection instead of image classification. Unfortunately, we are stuck on the last step:
In this tutorial, fast.ai is only used to build the model. For the prediction inside AWS, the code calls some PyTorch functions to load the model and then obtain the image class. In this case it is sufficient to load all the libraries from a public Lambda Layer that contains PyTorch, but does NOT contain fast.ai.
Now, the object detection functions that Jeremy wrote need to call some fast.ai functions, which means we have to load the library into the Lambda instance. Our first approach was to upload the fast.ai library as part of the execution code, but we couldn’t isolate it from the PyTorch Layer when building the project. We uploaded the whole package (which contained both fast.ai and PyTorch), but it was too heavy for the Lambda instance to load.
Then we tried to create a Layer ourselves that contained both fast.ai and PyTorch. When deployed, it weighs 640 MB, more than the 500 MB allowed (see “Q: What if I need scratch space on disk for my AWS Lambda function?” in the Lambda FAQ). When running a test, the logs show this error: “module initialization error: [Errno 28] No space left on device”.
The next step is to try to manually separate the fast.ai dependencies, uploading some of them in the Lambda Layer and some as part of the execution code, so as to split the libraries across different folders. We’re afraid this will take too long to achieve, and we’re not even sure it will actually work.
We have managed to run our code locally using Docker, so we are sure that the program runs. This issue is the only thing holding us back. Any questions and/or suggestions are very welcome!