So I solved the problem. To anyone reading this in the future: have a look at the repos in this thread to see what you need, but remember that your requirements.txt needs to point at pytorch wheels compatible with:
- the version of fastai that you’re using.
- the version of python that you’re using.
The list can be found here. If everything works in your Jupyter notebook but not when you deploy your model, you can check the version of fastai you're using by running the following in your notebook:
import fastai
print(fastai.__version__)
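It also helps to note which pytorch version that fastai install pulled in, since that's the version your deployed wheels should match. A quick check, assuming torch is importable in the same environment:

import torch
print(torch.__version__)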
So you want to include two wheels in your requirements.txt, both starting with:
“https://download.pytorch.org/whl”
Then, from the list, you want to choose the ones starting with cpu. First you need a wheel for torch. Choose a linux one; there will be multiple linux wheels too, and to pick a working one it needs to be compatible with the version of python you're using. This is reflected in the 'cp' part of the link: cp38, for example, refers to version 3.8 of python (do take this into account if you're including a runtime.txt in your repo, as shown just below).
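If you do ship a runtime.txt, the interpreter it pins should match that 'cp' tag. A minimal sketch, assuming you're deploying to a platform that uses Heroku-style runtime.txt files and you picked cp38 wheels (the exact patch version here is my guess; use one your platform supports):

python-3.8.6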
Once you've made your choice, append it to the hyperlink above. So, referring to one of the posts above mine, @enr would have chosen “cpu/torch-1.6.0%2Bcpu-cp38-cp38-linux_x86_64.whl” (the %2B is just a URL-encoded '+').
You do the same thing for the second hyperlink, this time for torchvision rather than torch (a full example is sketched below). I don't know exactly which version of pytorch is compatible with which version of fastai, but if your app fails because of a mismatch, the build logs should indicate that the wheel is 'not supported on this platform'. Errors in the application logs should also guide you in the right direction (click on 'More' in the top right-hand corner and choose 'logs' from the dropdown).
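Putting the pieces together, here's a minimal sketch of the relevant requirements.txt lines, reusing the example wheel above. The torchvision 0.7.0 wheel is my assumption based on the usual torch 1.6.0 / torchvision 0.7.0 pairing; substitute whatever matches your notebook:

fastai  # pin this to whatever fastai.__version__ printed in your notebook
https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp38-cp38-linux_x86_64.whl
https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp38-cp38-linux_x86_64.whl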
I don't know if I've got everything right; I was only able to piece this together thanks to everyone on this forum, and particularly to @joedockrill's and @ringoo's efforts.