You could use FastAI just to do the training and TensorRT to do the inference on the Nano. I did this with the pets dataset from lesson 1 (ResNet-50, 299x299 images) and found it took ~100 ms per inference (in C++). That is far worse than what should be expected, but at this stage I’m just trying to get it working.
If you have a look at …, you should be able to get quite good results.
Here is a demo running on the Nano. It can be done!
The first analysis takes a long time (warm-up), but after that it’s pretty fast.
I just got it running, and I’m getting 5-6 fps. This is doing inference on 240x240 images with resnet34.
The Jetson Nano is running at 5W power mode and fanless.
I’m using the Raspberry Pi camera (V2) and grabbing frames with cv2. At the moment I’m saving each grabbed frame as a .jpg so I can load it with “open_image()”, but this seems inefficient. Does anyone know how to do it without saving to disk first?
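One way to skip the round-trip through disk is to convert the frame in memory: cv2 hands you a BGR uint8 array in HWC layout, while fastai wants RGB, channels-first, floats in [0, 1]. A sketch of the conversion (the helper name `frame_to_chw` is my own; the fastai wrapping in the comment assumes fastai v1’s `Image`/`pil2tensor`-style API and is untested here):

```python
import numpy as np

def frame_to_chw(frame):
    """Convert a cv2 BGR frame (H, W, 3, uint8) to the layout fastai
    expects: RGB, channels-first, float32 scaled to [0, 1]."""
    rgb = frame[:, :, ::-1]               # BGR -> RGB
    chw = rgb.transpose(2, 0, 1)          # HWC -> CHW
    return np.ascontiguousarray(chw, dtype=np.float32) / 255.0

# With fastai v1 you can then wrap it without touching disk
# (assumption -- adjust to your fastai version):
#   import torch
#   from fastai.vision import Image
#   img = Image(torch.from_numpy(frame_to_chw(frame)))

frame = np.zeros((240, 240, 3), dtype=np.uint8)  # fake camera frame
print(frame_to_chw(frame).shape)  # (3, 240, 240)
```

This avoids both the JPEG encode and the file I/O on every frame, which should help the fps a bit.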