Lesson 1 In-Class Discussion ✅

I did not find the reference for imagenet_stats.

You can use r to return from the current method or function. Or put a breakpoint in the required file, then press c to continue execution until it reaches the breakpoint.
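For anyone new to the debugger, those are standard pdb commands; here is a minimal sketch (the function and data are made up for illustration):

import pdb

def compute_mean(batch):
    pdb.set_trace()  # execution pauses here and drops into the (Pdb) prompt
    return sum(batch) / len(batch)

compute_mean([1, 2, 3])

At the (Pdb) prompt, r runs until compute_mean returns and c continues execution until the next breakpoint (set one with b filename:lineno).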

Can anyone paste the lesson 1 class discussion chat link?

Hi,

I’m also having the same issue: padding_mode needs to be ‘zeros’ or ‘border’, but got ‘reflection’.
Please suggest how to fix this issue.

I am running the code on Windows 10 with:
torchvision-nightly 0.2.1
torchvision 0.2.1
torch 0.4.1

Thanks,
Ritika
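For what it’s worth, torch 0.4.1’s grid_sample only supports ‘zeros’ and ‘border’ (which matches the error message), while fastai’s default transforms use reflection padding; upgrading to the torch version fastai v1 expects should also remove the error. Until then, one workaround people have suggested is forcing zero padding. A minimal sketch, assuming your fastai version forwards padding_mode through the databunch factory (paths and the regex follow the lesson 1 notebook):

from fastai.vision import *

path = untar_data(URLs.PETS)
fnames = get_image_files(path/'images')
pat = r'/([^/]+)_\d+.jpg$'

# padding_mode='zeros' avoids the reflection padding that torch 0.4.1 rejects
data = ImageDataBunch.from_name_re(path/'images', fnames, pat,
                                   ds_tfms=get_transforms(), size=224,
                                   padding_mode='zeros').normalize(imagenet_stats)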

Hello,
I keep getting this error when I run learn.fit_one_cycle(4).
I’m not sure if it has to do with the images I’m using or if it could be something else I’m not aware of. Any help is appreciated.
Thanks

Hi, please see the FAQ thread. @Descobar14, your question is also in the FAQ.


In the lesson 1 notebook, when we plot the confusion matrix:

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()

Is it using a hold-out set to do this? Is this set different from the set used to train the model?

Thanks!

Which platform are you using?

Thanks! I kept looking for the method in the learn object - didn’t expect it to be a part of the img!

For ResNet (the original Caffe-trained models), the typical normalization is mean subtraction with no division by std. The per-channel mean for the ImageNet dataset is

mean = [103.939, 116.779, 123.68]

(BGR channel order)
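As a concrete illustration, that Caffe-style preprocessing looks like this in numpy (variable and function names are my own; assumes an RGB uint8 image with values in 0-255):

import numpy as np

# ImageNet per-channel means in BGR order (Caffe convention)
IMAGENET_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def caffe_preprocess(img_rgb):
    img = img_rgb[..., ::-1].astype(np.float32)  # reorder channels RGB -> BGR
    return img - IMAGENET_MEAN_BGR               # mean subtraction only, no std division

Note this differs from fastai’s imagenet_stats, which assumes 0-1 RGB values and divides by the std as well.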

@tsail good question. When we do one pass, we learn which direction to adjust the weights in (up or down) based on the data we have seen and the labels we are trying to tune the network to recognise. The problem is we do not know by how much to adjust the weights. The learning rate controls the size of the adjustment: we multiply the weight update (the gradient), not the weights themselves, by the learning rate.

A small learning rate makes smaller adjustments to the weights and needs more iterations over the data (epochs) to get to an optimal point. The caveat is the learner can get trapped at various points, but let’s not discuss that now as it could lead to confusion. A large learning rate adjusts the weights more aggressively.

The next question is: why don’t we just use large learning rates? With a large learning rate we can overshoot the optimal point we are trying to narrow in on, and because the steps are large we end up bouncing back and forth without ever converging on it. So it is common to start with a large learning rate and then gradually decrease it. However, this is just one method; there are many methods (automatic and manual) to adjust the learning rate.
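A tiny toy example of that trade-off (my own sketch, not from the lesson): minimising f(w) = (w - 3)^2 with plain gradient descent, where each step multiplies the gradient by the learning rate:

def gradient_descent(lr, steps=20, w=0.0):
    # minimise f(w) = (w - 3)**2; its gradient is 2*(w - 3)
    for _ in range(steps):
        grad = 2 * (w - 3)
        w = w - lr * grad  # the learning rate scales the size of the update
    return w

print(gradient_descent(lr=0.01))  # small lr: after 20 steps w is still only ~1.0
print(gradient_descent(lr=0.1))   # moderate lr: lands close to 3
print(gradient_descent(lr=1.1))   # too large: overshoots and bounces away from 3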


Google Colab

Hi, Did you fix this error?

Hi, No.

I am working on Colab now; I am uploading the data to my Google Drive and working on it there.

Ah ok. AWS is my platform and I have my data in S3. I am getting the same error - KeyError: ‘content-length’.
Thanks.

I did not check, but I am pretty confident that this is PyTorch.

Thanks for your explanation @maral!

I encountered the same issue with a different Kaggle dataset. Did you fix this?

Has anyone tried using mnist_stats, declared in fastai/vision/data.py?

When I try data.normalize(mnist_stats) I get an error that mnist_stats is not defined. I proceeded by declaring it in my notebook, but maybe data.py needs to be updated? (the __all__ part)
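For reference, the notebook workaround is just to declare the stats yourself before normalizing. A minimal sketch using the commonly cited MNIST mean/std; these values are my own, so check fastai/vision/data.py for the exact ones the library ships:

# widely used MNIST mean/std, repeated per channel for 3-channel loading;
# fastai's own definition may use slightly different numbers
mnist_stats = ([0.1307]*3, [0.3081]*3)

data = data.normalize(mnist_stats)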

Will be in the next release.
