Lesson 1 Discussion ✅


(jaideep v) #1158

I could not find the reference for imagenet_stats.
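
For reference, imagenet_stats is defined in fastai/vision/data.py (the same module that holds the mnist_stats discussed later in this thread). A minimal usage sketch, assuming a fastai v1 ImageDataBunch named data, as in the lesson 1 notebook:

# `data` is an existing ImageDataBunch, as built in the lesson 1 notebook
data = data.normalize(imagenet_stats)

# per fastai/vision/data.py, imagenet_stats holds ImageNet's per-channel
# (mean, std): ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])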


(Nikhil) #1159

You can use r to return from the current method or function. Or put a breakpoint in the required file, then press c to continue until you reach the breakpoint.
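
For context, a minimal sketch of those commands with Python's built-in pdb (assuming that is the debugger being used):

import pdb

def helper(x):
    pdb.set_trace()  # execution pauses here at a (Pdb) prompt:
                     #   r - run until the current function returns
                     #   c - continue to the next breakpoint
                     #   b some_file.py:42 - set a breakpoint in another file
    return x * 2

print(helper(21))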


(jaideep v) #1160

Can anyone paste the Lesson 1 class discussion chat link?


(ritika) #1161

Hi,

I’m also having the same issue: padding_mode needs to be ‘zeros’ or ‘border’ but got ‘reflection’.
Please suggest how to fix this issue.

I am running the code on Windows 10 with:
torchvision-nightly 0.2.1
torchvision 0.2.1
torch 0.4.1

Thanks,
Ritika
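
For anyone else hitting this, a hedged sketch of the commonly suggested workaround: force a padding mode that torch 0.4.1 supports. This assumes the fastai v1 factory methods accept a padding_mode argument, and uses the lesson 1 notebook variables path_img, fnames, and pat:

# ask for 'zeros' padding instead of the default 'reflection',
# which grid_sample in torch 0.4.1 does not support
data = ImageDataBunch.from_name_re(path_img, fnames, pat,
                                   ds_tfms=get_transforms(), size=224,
                                   padding_mode='zeros')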


(Daniel) #1162

Hello,
I keep getting this error when I run learn.fit_one_cycle(4).
I’m not sure if it has to do with the images I’m using or if it could be something else I’m not aware of. Any help is appreciated.
Thanks


(Francisco Ingham) #1163

Hi, please see the FAQ thread. @Descobar14, your question is also in the FAQ.


(Harold) #1164

In lesson 1 notebook, when we plot the confusion matrix:

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()

Is it using a hold-out set to do it? Is this set different from the set used to train the model?

Thanks!


(Harold) #1165

Which platform are you using?


(Jennifer Liu) #1166

Thanks! I kept looking for the method in the learn object - didn’t expect it to be a part of the img!


(Satish Kottapalli) #1167

For ResNet, the typical normalization is mean subtraction; no division by std. The mean for the ImageNet dataset is

mean = [103.939, 116.779, 123.68]

(BGR channels)
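
A minimal sketch of that Caffe-style preprocessing (illustrative only; note that fastai's imagenet_stats instead uses RGB mean and std):

import numpy as np

# subtract the per-channel ImageNet mean (BGR order); no division by std
IMAGENET_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess_bgr(img):
    """img: HxWx3 float array with channels in BGR order."""
    return img.astype(np.float32) - IMAGENET_MEAN_BGR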


#1168

@tsail good question. When we do one pass we learn which direction to adjust the weights (up or down), based on the data we have seen and the labels we are trying to tune the network to recognise. The problem is that we do not know by how much to adjust the weights. The learning rate represents the size of the adjustment: we multiply the gradient by the learning rate to get the step applied to the weights.

A small learning rate makes smaller adjustments and needs more iterations over the data (epochs) to reach an optimal point. (The caveat is that the learner can get trapped at various points, but let’s not discuss that now as it could lead to confusion.) A large learning rate will adjust the weights more aggressively.

The next question is: why don’t we just use large learning rates? If we use a large learning rate we can overshoot the optimal point we are trying to narrow in on, and because the steps are large we end up bouncing back and forth without ever converging on it. So it is common to start with a large learning rate and then gradually decrease it. However, this is just one method; there are many methods (automatic and manual) for adjusting the learning rate.
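
To make that concrete, a toy gradient-descent sketch (illustrative only, not fastai code) minimizing f(w) = (w - 3)^2; a small learning rate converges, while a large one bounces back and forth:

def grad(w):             # derivative of f(w) = (w - 3)**2
    return 2 * (w - 3)

w, lr = 0.0, 0.1         # try lr = 1.1 to see the overshoot and divergence
for step in range(25):
    w -= lr * grad(w)    # the learning rate scales each weight update
print(w)                 # close to the optimum w = 3 for a small enough lr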


(Daniel) #1169

Google Colab


(Amulya) #1170

Hi, Did you fix this error?


(Bhuvana Kundumani) #1171

Hi, no.

I am working on Colab now; I am uploading the data to my Google Drive and working on it there.


(Amulya) #1173

Ah, ok. My platform is AWS and my data is in S3. I am getting the same error: KeyError: ‘content-length’.
Thanks.


#1174

I did not check, but I am pretty confident that this is PyTorch.


(Larry) #1175

Thanks for your explanation @maral!


(Amulya) #1176

I encountered the same issue with a different Kaggle dataset. Did you fix this?


(Akshay) #1177

Has anyone tried using mnist_stats, declared in fastai/vision/data.py?

When I try data.normalize(mnist_stats) I get the error that mnist_stats is not defined. I proceeded by declaring it in my notebook, but maybe data.py needs to be updated? (the __all__ part)
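
Until the export is fixed, an explicit import should work as a workaround, since __all__ only restricts star imports (assuming mnist_stats is defined in fastai/vision/data.py as described):

# `from fastai.vision import *` misses names absent from __all__,
# but a direct import of the name still works:
from fastai.vision.data import mnist_stats

data = data.normalize(mnist_stats)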


(Jeremy Howard (Admin)) #1178

Will be in the next release.