Fastai inference runs very slow on Nvidia Jetson Nano

(Anshul Khare) #1

I am running a fastai model (image classification) on a Jetson Nano, and it takes 12-15 seconds to predict each frame. The same model takes less than a second on my MacBook Air (4 GB, 1.6 GHz).

Is there a way to optimize Jetson Nano for fast inference with Fastai?

I used @Interogativ's instructions to install fastai on the Jetson Nano (Share your work here ✅).

Any pointers would be greatly appreciated.


(Graham Chow) #2

You could use fastai just for training and TensorRT for inference on the Nano. I did this with the pets dataset from lesson 1 (ResNet-50, 299x299 images) and found inference took ~100 ms (in C++). That is still far worse than what the hardware should manage, but at this stage I'm just trying to get it working.

If you have a look at, you should be able to get quite good results…

Here is a demo running on the Nano. It can be done :slight_smile: