Has anyone done a project where they deployed their fastai vision project on a Raspberry Pi? My multi-label prediction model appears to be working well (beyond my imagination!), and I'd like to learn how to deploy it on a Raspberry Pi…
I would really like this too. It would be interesting to see what the prediction time per image is.
I finally managed to get fastai up and running on a Raspberry Pi Zero - it's not fast, but it seems to work.
I gave up trying to install the spacy dependency - as best I understand, it doesn't really compile on ARM - but that dependency is only needed for the fastai text package, so for vision it should work.
If you increase the swap size on the Raspberry Pi, it is actually possible to compile PyTorch on the device itself, but the compilation will take many hours (I gave up after ~10 hours).
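If you do want to attempt the on-device build, the swap can be increased with Raspbian's dphys-swapfile service. A rough sketch (the 2048 MB value is my assumption - pick whatever your SD card can spare, and set it back afterwards):

```shell
# Temporarily enlarge swap so the PyTorch build doesn't run out of memory.
# Assumes Raspbian with dphys-swapfile; 2048 MB is an example value.
sudo dphys-swapfile swapoff
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
```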
In the end I cross-compiled as much as possible on a desktop, and I built PyTorch with these flags:
If you want to try it yourself, get my built python wheels here:
Make sure you get Python 3.6 on the Raspberry Pi first; I used Berryconda: https://github.com/jjhelmus/berryconda
Then install the wheels:
pip install numpy-1.17.2-cp36-cp36m-linux_armv6l.whl
pip install numexpr-2.7.0-cp36-cp36m-linux_armv6l.whl
pip install Bottleneck-1.2.1-cp36-cp36m-linux_armv6l.whl
pip install Pillow-6.1.0-cp36-cp36m-linux_armv6l.whl
pip install torch-1.2.0a0+8554416-cp36-cp36m-linux_armv6l.whl
pip install torchvision-0.4.0a0+d31eafa-cp36-cp36m-linux_armv6l.whl
pip install fastai --no-deps
Inference time with a resnet34 model is around 52s per image on a Raspberry Pi Zero - not exactly fast, but enough for my use case.
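If you want to measure this yourself, one simple way is to time a single prediction call. A minimal sketch - the learner, export file, and image names in the usage comment are hypothetical placeholders:

```python
import time

def timed_predict(predict_fn, item):
    """Run one inference call and return (prediction, elapsed seconds)."""
    start = time.perf_counter()
    result = predict_fn(item)
    return result, time.perf_counter() - start

# Hypothetical usage with a fastai v1 learner loaded from an exported model:
# from fastai.vision import load_learner, open_image
# learn = load_learner('.', 'export.pkl')
# pred, secs = timed_predict(learn.predict, open_image('test.jpg'))
# print(f'{secs:.1f}s per image')
```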
Hi @Peksa, thanks for sharing your work!
I'm new to the fastai and AI world, so I have a few questions:
- May I just install the standard Python, PyTorch and fastai on my Raspberry Pi without compiling PyTorch?
- Do you need to compile fastai on the Raspberry Pi? How do I know which libraries or frameworks need to be compiled on the Raspberry Pi?
- What does "inference time" mean? May I understand it as the time taken by a call to learn.predict()?