Has anyone had success running through the Fastai tutorials on any Jetson boards?

What has your experience been with getting your Jetson set up and configured?

My experience so far is that the ARM CPU architecture does not have as much development support as x86, so getting the setup just right can be challenging. Although I feel I finally have my Nano set up to run through Lesson 1, I'm hung up at the first place I run `learn.fit_one_cycle(4)` in the Lesson 1 Pets notebook. This makes me wonder whether it is an issue of compute resources rather than software compatibility. More on this later.

Although working through the fastai lessons is a top priority, I also find it useful to refresh my Linux skills and learn more about the Python infrastructure stack. I have over 7 years of experience developing Java middleware services on Linux, but it has been about 4–5 years since I've been seriously hands-on. I say this because I don't mind putting in the extra work getting my local environment set up. However, I don't want to waste my time if this configuration is not ideal for the fastai learning track.

What has your experience been with running through Lesson 1 of the fastai course (and beyond)?

  • The Nano has 4 GB of memory shared between the CPU and GPU
  • I installed an 8 GB swap file (good for the CPU, but not applicable for the GPU)
  • I'm running headless, to conserve as much memory as possible
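Since memory headroom is the whole game on the Nano, it helps to check what's actually available before kicking off training. A minimal sketch that parses `/proc/meminfo`-style output (the sample numbers below are illustrative, not measurements from a real Nano):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style output into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])  # first field is the kB count
    return info

# On the board itself you would read the real file:
#   stats = parse_meminfo(open("/proc/meminfo").read())
sample = (
    "MemTotal:        4059240 kB\n"
    "MemAvailable:    1204312 kB\n"
    "SwapTotal:       8388604 kB\n"
)
stats = parse_meminfo(sample)
```

Watching `MemAvailable` shrink during a training run is a quick way to tell whether you're about to hit the swap file (slow) or an outright out-of-memory kill.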

Another option is to go for the Jetson TX2 module.

The gearhead in me says, “Go for the TX2, man!” However, I already have the Nano and I'm hoping I can make it through fastai's seven lessons with it before I invest more dollars in hardware.

Very interested in others' thoughts and perspectives on this topic!



I have both, but I am focusing on the Nano since it only requires 5 V and it's much smaller (portable). I'm only using it for inference and using my main machine for training.

Thanks for the reply @titantomorrow!
What is your main machine? I'm assuming it's set up to work with the fastai course material?

A Dell Windows 10 machine with a standard RTX 2060 GPU. I just lower the batch size. I've installed fastai in a virtual environment using pip. Even the lowly 2060 has 1,920 CUDA cores vs. the Nano's 128, so I don't think it is wise to train on the Nano.
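The "lower the batch size" trick generalizes: halve the batch size until the run fits in GPU memory (in fastai you'd then pass the result as the `bs=` argument when building your data loaders). A rough sketch of that search as a pure function — the memory-per-sample numbers in the usage line are illustrative guesses, not measurements:

```python
def fit_batch_size(budget_mb, per_sample_mb, start=64):
    """Halve a starting batch size until the rough memory estimate
    (batch size x per-sample cost) fits within the budget."""
    bs = start
    while bs > 1 and bs * per_sample_mb > budget_mb:
        bs //= 2
    return bs

# e.g. ~40 MB/sample estimate against ~2000 MB free on a shared-memory Nano
bs = fit_batch_size(2000, 40)
```

In practice people do this empirically — start at the notebook's default, and halve on every CUDA out-of-memory error — but the shape of the search is the same.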

@titantomorrow Thanks for the feedback! I disappeared from fastai forums for a bit… but I’m back now! :slight_smile:

While I contemplated the idea of building my own machine, I completed the Neural Networks and Deep Learning Coursera course. This was sufficient motivation to build my own system and get back to fastai. I built a Linux system with a 12-core (24-thread) AMD Threadripper, 32 GB of memory, and an NVIDIA RTX 2070 GPU, with room to expand in the future.

I also just completed Lesson 2 of the fastai course. Preparing to deploy a model or two via a Flask web app.

I’m having an amazing time working through fastai. Love this course! Glad I built my machine!

Jetson Xavier NX announced.

Check out the benchmark vs Nano.

Same form factor. Different price factor.

Very impressive! I looked at the previous Xavier model, but I'm focused on completing the fastai course before jumping into edge computing. I'm also hoping the deep learning libraries continue to mature on the aarch64 architecture. Thanks for sharing @digitalspecialists!

I have finished the fastai course. I am able to train models using Google Colab and deploy them on a Windows machine with a fastai installation. But inference is slow because only the CPU is available. I am looking for a minimal hardware deployment with a GPU. Is it possible to install fastai v1/v2 on the Jetson Nano and run inference? I am not looking to run training on the Jetson.
I am new to this field. Excuse me if my question seems rudimentary.

Yes, it is possible. I had it configured and working with fastai. You just need to work through the differences between the x86 and aarch64 architectures when setting up.
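For inference-only use, the usual workflow is: train on the big machine, call `learn.export()` there, copy the exported `.pkl` to the Nano, and `load_learner()` it once at startup. Since the load is expensive on a 4 GB board, it's worth wrapping it in a load-once pattern; a generic sketch where `loader` is a stand-in for the fastai `load_learner` call (no fastai required to see the idea):

```python
class LazyPredictor:
    """Load an expensive model once, on first request.

    `loader` is a placeholder for something like
    `lambda: load_learner("export.pkl")` on the actual Nano.
    """
    def __init__(self, loader):
        self._loader = loader
        self._model = None
        self.load_count = 0  # for illustration: proves we load only once

    def predict(self, x):
        if self._model is None:          # first call pays the load cost
            self._model = self._loader()
            self.load_count += 1
        return self._model(x)            # later calls reuse the model
```

The same object can then back a web endpoint or a camera loop without ever reloading the model.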

There is also a person on the forums who actually published a video of his Nano doing inference. I bet you can find it with a search or two.