Lesson 1 In-Class Discussion ✅

Awesome lecture! Kudos to the fast.ai team! :grin:


So I understand that I need a GPU to effectively train models, but how about evaluating a trained model against a new image? Is it possible to do that using CPU only?

I ask because I would love to be able to deploy a trained fastai/pytorch model to a much cheaper hosting provider in a way that allowed me to classify new images without paying for a GPU.


Could you link the installation steps for version 1, please? Thanks in advance.

Re-watchable. Just please don’t share it outside this group.


Yes, once the model has been trained, it is possible to run inference on CPU only.
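To make that concrete, here's a minimal sketch of CPU-only inference in plain PyTorch. The architecture and file name are placeholders, not the lesson's model; the key part is `map_location="cpu"`, which remaps any GPU tensors onto the CPU at load time:

```python
import torch
import torch.nn as nn

# A stand-in for a trained network (placeholder architecture).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Save the weights as you would after training (possibly on a GPU box).
torch.save(model.state_dict(), "model.pth")

# On the cheap CPU-only host: rebuild the architecture and remap any
# GPU tensors onto the CPU while loading the saved weights.
cpu_model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
cpu_model.load_state_dict(torch.load("model.pth", map_location="cpu"))
cpu_model.eval()  # disable dropout/batchnorm training behaviour

with torch.no_grad():  # no gradients needed for inference
    prediction = cpu_model(torch.randn(1, 4))
print(prediction.shape)  # torch.Size([1, 2])
```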

Yes, the confusion matrix is for the validation set. Typically in a dataset you'll be given a set of data with labels. You split this labelled data into a training set and a validation set; the validation set helps you test whether your model is overfitting. The test set (for example, in a Kaggle competition) is a set of data for which you do not have the labels. You wouldn't be able to produce a confusion matrix on data where you do not know the actual labels.
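You can see why labels are required directly from how a confusion matrix is built. A tiny sketch with made-up labels (every cell counts pairs of actual vs. predicted class, so the actual label must be known):

```python
# Build a confusion matrix by hand: rows = actual class, cols = predicted.
# Every cell needs the true label, which is why this works on a labelled
# validation set but not on an unlabelled test set.
actual    = ["cat", "dog", "dog", "cat", "dog"]
predicted = ["cat", "dog", "cat", "cat", "dog"]

classes = sorted(set(actual))
matrix = {a: {p: 0 for p in classes} for a in classes}
for a, p in zip(actual, predicted):
    matrix[a][p] += 1

print(matrix)  # {'cat': {'cat': 2, 'dog': 0}, 'dog': {'cat': 1, 'dog': 2}}
```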


max_lr=slice(1e-6, 1e-4)

In this, is 1e-4 the learning rate of the last layer of the net, or of the last frozen layer?
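For what it's worth, the 1e-4 goes to the last layer group (the head of the network), and fastai v1 spreads the rates geometrically across the layer groups between the two endpoints of the slice. A rough re-implementation of that spacing (my own sketch, not the library source):

```python
# Rough sketch of how fastai v1 spreads slice(lo, hi) learning rates
# across layer groups: geometric interpolation, with the highest rate
# (hi) applied to the last group, i.e. the head of the network.
def even_mults(start: float, stop: float, n: int):
    mult = (stop / start) ** (1 / (n - 1))
    return [start * mult ** i for i in range(n)]

# With the usual 3 layer groups of a fastai CNN learner:
print(even_mults(1e-6, 1e-4, 3))  # roughly [1e-06, 1e-05, 1e-04]
```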


or via docs:


This will show you the default.


@rachel would it be possible to get an answer on that? It's really puzzling me :confused: You asked Jeremy, but as my question wasn't very clear he didn't really understand what I meant, and just said that we'll talk more about plot_top_losses in the next lecture.

I’ll dig into the code and get you an answer shortly


Awesome session! Now back to bed, as my alarm goes off in 3 hours. ;D



Is the guide for downloading data from Google Images that Jeremy referred to in the lecture available now?


Thanks! I'm afraid I'm not very comfortable diving into the source code yet.

How many images do I need to build my own classifier? 50? 100?

You’ll get there if you stick with it!


Installation trouble. TL;DR: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

Full account:
I had CUDA 9.1 and cuDNN 7.0 working fine with PyTorch on my Ubuntu 16.04 system, and have run with these before. But tonight when I installed fast.ai, it seemed it wasn't using the GPU. :-(
OK, so I saw I need CUDA 9.2 and cuDNN 7.1, so I upgraded to them by following these instructions.

Then rebooted, then ran

conda install -c pytorch pytorch-nightly cuda92
conda install -c fastai torchvision-nightly
conda install -c fastai fastai

…but when I try to do

>>> from fastai import *

I get

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

Running updatedb and then locate libcuda.so returns


i.e., no libcuda.so.1 anywhere on the machine, even though I ran the CUDA installer and installed the nvidia-396 and nvidia-modprobe apt-get packages.

I found a PyTorch Forums thread where this error appears to be related, but it's for the non-GPU version and I'm trying to use the GPU version.

I tried conda uninstall on the above packages and then reinstalled them. No good. I'm out of ideas. Does anyone have a suggestion? Thanks.

PS- nvcc works, nvidia-smi shows both my cards as working.

EDIT: Just noticed Fast.ai GitHub says one doesn’t need a separate install of CUDA, that it comes packaged with PyTorch now. Well, in that case the nightly PyTorch is indeed not using the GPU, despite my grabbing the GPU version.
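A quick diagnostic sketch for anyone in the same spot (this assumes a standard PyTorch install; it just reports what PyTorch itself thinks is available):

```python
import torch

# If this prints False, PyTorch was installed without working CUDA support
# (or the driver's libcuda.so.1 can't be found) and will fall back to CPU.
print("CUDA available:", torch.cuda.is_available())
print("PyTorch version:", torch.__version__)
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```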


Some of that is going to just depend on how close your images are to the problem that was already solved. The more different the problem, the more images you will need. Try it with however many images you have and if you need to get more images, you will have the pipeline to feed more images in easily.

With a lot of problems, though, you should be able to build something decent with 50-100 images. The more you have, the better.

I have the same question. Jeremy talked about mobile devices and said the clean way is to keep models on a server and get predictions from them. Is it because even getting a prediction may require a GPU?

I’ve been able to run inference on images on a free Heroku instance, so it’s totally doable. Pytorch even makes a CPU-only wheel (Python package) to keep your bundle size low.