Wiki: Lesson 1

(Jeremy Howard (Admin)) #58

Congrats on the progress! Just to clarify, it scales the image’s largest dimension to 32, then center crops.




resnet34 is a PyTorch model imported from torchvision.models, so the ResNet class constructor you are referring to is not part of the fastai library.

For the sake of reference, you can find the source here:

If you want to use the ?? operator to obtain the source code, you should import ResNet first:

from torchvision.models import ResNet
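If you’re working outside a notebook (where ?? isn’t available), the standard library’s inspect module gives you the same source listing. A minimal sketch, demonstrated on a stdlib class since the idea carries over directly once ResNet is imported:

```python
import inspect
import textwrap

# ??SomeClass in Jupyter shows the class source; inspect.getsource
# returns the same text as a string. Here we use a stdlib class as a
# stand-in, but inspect.getsource(ResNet) works the same way after
# `from torchvision.models import ResNet`.
src = inspect.getsource(textwrap.TextWrapper)
print(src.splitlines()[0])  # the `class TextWrapper:` definition line
```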

(Anand Agrawal) #60


Is it ok to use Ubuntu through VirtualBox instead of working with a Windows PC? I have a PC with Windows 7 and I am getting a lot of errors while running fastai.


(Kevin Dewalt) #61

I’m running the library on my own AI box. In case anyone runs into the following error:

error rendering jupyter widget. widget not found model_id...

Try upgrading to ipywidgets 7.0:
conda install -c anaconda ipywidgets


How to do an Ubuntu local setup for part1 v2?
(Sudarsan Padmanabhan) #62

Hi wallace,

Looks like your im attribute is None, which means the image is not defined.
Maybe you can check the dimensions of the images you are training on?

Shape Numpy Docs


(Nick) #63

Note that the changes outlined here: Change to how TTA() works affect this notebook.

I found that changing

probs = np.mean(np.exp(log_preds),0)

to

probs = np.exp(log_preds)

fixed this. It would be great if someone could verify this is correct.
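For anyone wondering why dropping np.mean fixes it: the mean over axis 0 only makes sense while TTA returns one set of log-probabilities per augmentation. A toy numpy sketch (the (n_aug, n_samples, n_classes) shape is my assumption about the old TTA output):

```python
import numpy as np

# Old behaviour (assumed): one log-prob array per augmentation,
# shape (n_aug, n_samples, n_classes) -- here 5 x 4 x 2.
log_preds_aug = np.log(np.full((5, 4, 2), 0.5))
probs = np.mean(np.exp(log_preds_aug), 0)  # average over augmentations
print(probs.shape)  # (4, 2)

# New behaviour (assumed): TTA already returns the averaged
# log-probs of shape (n_samples, n_classes), so only np.exp is
# needed -- a further np.mean(..., 0) would average over samples.
log_preds = np.log(np.full((4, 2), 0.5))
probs = np.exp(log_preds)
print(probs.shape)  # (4, 2)
```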

1 Like

(Reshama Shaikh) #65

@nickl Looks like someone already made the update. Maybe you just need to do a git pull of the fastai repo?


(Nikhil) #66

I’m playing with the learning rate and sz. When I set sz=300, it displays some iteration data that doesn’t appear if I keep sz=224.

1 Like

(ecdrid) #67

It might happen because 224 might be the default sz of the images?
And when you change it, rescaling is done…

Hope I am correct…




I’m not seeing much difference with the results of all the tweaks - am I doing something wrong, or is the basic configuration already pretty good?

learn = ConvLearner.pretrained(arch, data, precompute=True), 3)
[ 0.       0.04593  0.02452  0.99219]
[ 1.       0.03632  0.0258   0.99072]
[ 2.       0.03735  0.02532  0.9917 ], 3, cycle_len=1)
[ 0.       0.0551   0.02586  0.9917 ]                         
[ 1.       0.04168  0.02625  0.99023]                         
[ 2.       0.04431  0.02597  0.99121]   

lr=np.array([1e-4,1e-3,1e-2]), 3, cycle_len=1, cycle_mult=2)
[ 0.       0.04332  0.02481  0.98975]
[ 1.       0.04008  0.02324  0.99268]
[ 2.       0.03363  0.02141  0.99316]
[ 3.       0.03325  0.02021  0.99121]
[ 4.       0.02273  0.0233   0.99072]
[ 5.       0.02452  0.02243  0.99268]
[ 6.       0.0247   0.02174  0.9917 ]

log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy(probs, y)


Thanks for the amazing lesson1!
When I run lesson1 (part1, v2), it allocates all the GPUs at once. How can I use only one GPU, so that my colleagues can work on their projects too?


(ecdrid) #70


When you move a tensor to the GPU with .cuda(), you can set the destination (which GPU) by simply passing an integer. To use the first GPU, use .cuda(0). The same can be done with everything else CUDA-related…

It’s from the tutorials on Pytorch
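An alternative that doesn’t require touching any .cuda() calls (my suggestion, not from the PyTorch tutorial) is to hide all but one GPU from the process via an environment variable, set before torch or fastai initialise CUDA:

```python
import os

# Expose only the first GPU to this process. CUDA then enumerates a
# single device, so the library cannot allocate the others. This must
# run before the first `import torch` / `import fastai` in the notebook.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0
```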



(Nick) #71

@reshama I’m not seeing that update.

Are you looking at

I see the last change on December 31.

I see this commit for November 30, which fixes lesson2-image_models.ipynb and cifar10.ipynb

In lesson 1 I still see this:

1 Like

(Aless Bandrabur) #72

Regarding 3., your learning rate schedule, which still doesn’t work:

Both plot_lr() and plot() use samples from the training dataset.

This means that for plot_lr() you will have number_of_iterations = training_dataset_size / batch_size = 150/15 = 10. On your first graph I can only see 5-6 iterations. You can double-check the sizes by printing them with print( and print(len(

In the case of the plot() method, you want to plot the learning rate against the loss. The two variables have the same length, equal to number_of_iterations. But the plot() method by default cuts off the first 10 values and the last 5 values. So if you want to use the function as it is, you will need at least 17 iterations in order to plot a 2-point line. Alternatively, you can call the function starting from 0, learn.sched.plot(n_skip=0), but you will still need a minimum of 7 iterations.

Probably the best/easiest option would be to decrease the batch size so that you can display these graphs.
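The iteration counts above can be sketched as a tiny calculation (n_skip_end=5 is my assumed name for the tail cutoff; only n_skip appears in the post):

```python
# Minimum iterations for sched.plot() to draw at least `points` points,
# given it drops the first `n_skip` and the last `n_skip_end` values.
def min_iterations(n_skip=10, n_skip_end=5, points=2):
    return n_skip + n_skip_end + points

print(min_iterations())          # 17 with the defaults (10 + 5 + 2)
print(min_iterations(n_skip=0))  # 7 when nothing is skipped at the start
```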


(ecdrid) #73

It’s like history being rewritten…
Thanks…
It saved me from creating another thread…

1 Like

(Daniel Rock) #74

Try: !rm -r {PATH}train/.ipynb_checkpoints

Also do this for valid.


(Jaryl) #75

Hi guys, how do I resolve this issue? The directory exists; it’s just that there should be a slash between Fastaipics and valid, i.e. Fastaipics/valid… Thanks!


(Jaryl) #76

I get this when I set replace=False. What does “cannot take a larger sample than the population” mean, and how do I work around it?
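That message usually comes from numpy’s sampler: with replace=False it can draw at most as many items as the pool contains. A minimal sketch (whether fastai calls np.random.choice internally here is my assumption):

```python
import numpy as np

pool = np.arange(5)

# With replace=False you can draw at most len(pool) items:
sample = np.random.choice(pool, 5, replace=False)
print(sample.shape)  # (5,)

try:
    np.random.choice(pool, 10, replace=False)  # asks for more than exists
except ValueError as err:
    print(err)  # numpy refuses: the sample is larger than the population
```

The workaround is either to request a sample no bigger than the source set, or to keep replace=True so items may be drawn more than once.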




Hello everyone,

I’m trying to reproduce the first notebook on a sample of the original dogscats dataset (around 200 pictures) by following the instructions given at the end of the notebook (the section called “Review: easy steps to train a world-class image classifier”), but I’m a bit confused.
I have difficulty understanding the two-step procedure corresponding to the points

  1. Train last layer from precomputed activations for 1-2 epochs
  2. Train last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1

which I implemented with, 2)
learn.precompute = False, 3, cycle_len=1)

There are two things I don’t understand:

  1. Why is data augmentation related to precompute=False? I had the impression that these two things were independent. I thought that precompute means we already have fixed activations from the first layers, and that data augmentation just means we artificially produce more data by adding modified versions (rotations, cropping, etc…) of the original pictures. In which way are the two related?
  2. Why do we do both 3 AND 4? Is it a way to initialize the weights to some good values (in 3) and then improve them (in 4), rather than starting with random weights?

Sorry for my naivety, I’m really a beginner.
Thanks in advance!



Another quick question about how to use lr_find():

I don’t really understand the purpose of the variable lrf in the cell with


It looks like lrf doesn’t reappear anywhere else in the code. I had the impression that the aim of the lr_find method was to then be able to do the plot with the command


and then to choose a good learning rate by looking at the plot.

Did I get something wrong?

Thanks in advance.