Fastai inference runs very slowly on Nvidia Jetson Nano

I am running a Fastai model (image classification) on the Jetson Nano and it takes 12-15 seconds to produce a prediction for each frame. The same model takes less than a second on my MacBook Air (4 GB, 1.6 GHz).

Is there a way to optimize the Jetson Nano for fast inference with Fastai?

I used @Interogativ's instructions to install fastai on the Jetson Nano (Share your work here ✅).

Any pointers would be greatly appreciated.


Hi,

You could use FastAI just for the training and TensorRT for the inference on the Nano. I did this with the Pets dataset from lesson 1 (ResNet-50, 299x299 images) and found it took ~100 ms per inference (in C++). This is far worse than what should be expected, but at this stage I'm just trying to get it working.

If you have a look at it, you should be able to get quite good results…
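For reference, here is a minimal sketch of that route, assuming a fastai v1 Learner exported with learn.export() (the file names and image size are placeholders): export the underlying PyTorch model to ONNX, which TensorRT can then parse into an engine.

```python
# Sketch: export the trained fastai model to ONNX for TensorRT.
# Assumes fastai v1; 'export.pkl' comes from learn.export() after training.
import torch
from fastai.vision import load_learner

learn = load_learner('.', 'export.pkl')   # placeholder path/filename
model = learn.model.eval().cpu()

dummy = torch.randn(1, 3, 299, 299)       # must match the training image size
torch.onnx.export(model, dummy, 'pets.onnx',
                  input_names=['input'], output_names=['output'])

# On the Nano, a TensorRT engine can then be built from pets.onnx,
# e.g. with the trtexec tool that ships with TensorRT.
```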

Here is a demo running on the Nano. It can be done 🙂

The first analysis takes a long time, but after that it's pretty fast.
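That slow first call is typical CUDA/cuDNN warm-up cost, so one idea is to run a throwaway prediction on a blank image at startup, before any real frames arrive. A rough sketch, assuming fastai v1 and a model exported to 'export.pkl' (both names are placeholders):

```python
# Sketch: warm up the GPU with one dummy prediction at startup,
# so the first real frame isn't hit by CUDA/cuDNN initialization cost.
import torch
from fastai.vision import load_learner, Image

learn = load_learner('.', 'export.pkl')    # placeholder model file
dummy = Image(torch.zeros(3, 240, 240))    # blank frame, same size as real input
learn.predict(dummy)                       # slow first call happens here, not in the loop
```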

I just got it running, and I'm getting 5-6 fps. This is doing inference on 240x240 images with resnet34.

The Jetson Nano is running in 5 W power mode, fanless.

I'm using the Raspberry Pi camera (V2) and grabbing frames with cv2. For now, I'm saving each grabbed frame as a .jpg so I can load it with the "open_image()" command, but this seems inefficient. Does anyone know how to do it without saving to disk first? One possible approach is sketched below.
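One way to skip the disk round-trip (a sketch, assuming fastai v1's pil2tensor helper and a placeholder 'export.pkl' model file) is to convert the BGR frame from cv2 straight into a fastai Image in memory:

```python
# Sketch: feed a cv2 frame to fastai without writing a .jpg first.
# Assumes fastai v1; 'export.pkl' is a placeholder model file.
import cv2
import numpy as np
from fastai.vision import Image, load_learner, pil2tensor

learn = load_learner('.', 'export.pkl')
cap = cv2.VideoCapture(0)                  # camera index 0 as an example

ret, frame = cap.read()
if ret:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)              # cv2 gives BGR, fastai expects RGB
    img = Image(pil2tensor(rgb, dtype=np.float32).div_(255))  # in-memory fastai Image
    pred_class, pred_idx, probs = learn.predict(img)
cap.release()
```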

Did you ever figure out how to get this to work at faster speeds?

Update here: Analyzing frames from OpenCV cv2.VideoCapture()