Can you install and run fastai on a Raspberry Pi and Coral?

My manager and I are planning to run fastai (train models, fine-tune them, transform images…) on a Raspberry Pi with a Google Coral.

I have found guides on running fastai and PyTorch on a Raspberry Pi, but I am not sure if they are compatible with the Google Coral.

Is fastai + pytorch compatible with Coral TPUs? If not, are there alternatives (Jetson Nano…)?


Fastai is essentially a wrapper around PyTorch, so there’s no fastai + PyTorch, only PyTorch. As far as I’m aware, Coral TPUs are only compatible with TensorFlow, but you could write/train the bulk of your model in PyTorch, convert it to TensorFlow with ONNX, and do the rest in TensorFlow. Alas, ONNX may prove to be quite tricky, and you’d still have to work with TensorFlow at some point, so you could argue this is the worst of both worlds.

If you don’t want to go near TensorFlow (completely understandable), I think the Jetson Nano is your best bet since it supports PyTorch and has an active community. Beware though: I’ve been dabbling with TinyML & microcontrollers for a while, and TensorFlow (TensorFlow Lite and TensorFlow Lite for Microcontrollers specifically) is much more mature for deployment on edge devices, pruning, quantization, etc. For the time being at least, it wouldn’t hurt to learn TF and give it a shot.

Hopefully this helps!


Hi @BobMcDear, great answer. I asked the fastai Discord server and found very interesting resources (thanks to Discord user @ Roshi).

Basically, they walk you through converting the PyTorch model with ONNX so you can use it with the Coral TPU.

As far as I have read (though I am a complete beginner), ONNX is not very complex to use for converting PyTorch models, but I am probably wrong given what you say.

Example code I found to export the PyTorch model:

# Code written by @ Roshi
torch.onnx.export(
    learn.model,                                # the trained PyTorch model
    torch.randn(1, 3, 224, 224, device='cpu'),  # dummy input fixing the input shape
    "PATH_TO_PRINT_MODEL",                      # where to save the exported model
    input_names=["input"],    # name of the input tensor in the exported graph
    output_names=['output'],  # name of the output tensor
)

I think NVIDIA products may be more suitable in this case. I will have to study the Jetson Nano in more detail. Do you know if it's compatible with a Raspberry Pi, or do I need the developer kit?

And thank you very much.


Regarding ONNX’s ease-of-use, it really depends. Are you converting an image classifier with well-known operations, or is your model a custom-built CNN with recent state-of-the-art operations not tested with ONNX? For example, if you’re using a popular library like Timm, the odds are conversion with ONNX would be straightforward, but if you’re implementing a new architecture by yourself from scratch, you are most certainly going to encounter errors down the road. Needless to say, you’d eventually fix them, but there may have been faster solutions (TensorFlow) to begin with. In short, if you’re using popular libraries/models/etc. for simple tasks like classification, ONNX shouldn’t be too challenging. Otherwise, TensorFlow may be the simpler and faster option.

Would you please elaborate? The Jetson Nano is a stand-alone device, no need for a Raspberry Pi. They’re mostly substitutes, not complements :slight_smile:.

Cheers!


I am planning to use fastai or Hugging Face; I do not expect to create or modify an architecture. But it seems a good idea to make a more long-term decision: either switch to TensorFlow and the Coral, or stick with PyTorch and use NVIDIA.

So I do not need a developer kit? Can I simply plug the Jetson Nano into a Raspberry Pi (or another computer) and do the inference there once I have trained the model?


The developer kit is for, well, developing, and it comes with a carrier board. The module by itself is meant for production, and it somehow costs $30 more (I believe that’s got to do with the warranty & support). If you get the developer kit, you would set it up with your computer, and you could, for example, attach it to your Raspberry Pi via the carrier board to use its camera and perform object detection.

However, if you purchase solely the module, things would be more complicated. For starters, you would need a carrier board of your own (list of supported boards here), and it would naturally be harder to set up initially. The reason you may want to go with this option is if you already have existing infrastructure and would like to integrate your machine learning model(s) into it with the help of the Jetson, in which case the developer kit is not the preferable option.

Does that make sense? Please do let me know if you have any other questions.

Have a nice afternoon!


Yes it does, now I understand. Thanks!
