Hi GregFet, could you share the Kaggle link for downloading the satellite dataset? Thanks a lot.
So in the lesson Jeremy mentioned that the validation images are center-cropped to be square. But are the training images also cropped that way? It seems to me they would be, but that’s not clear.
Hi all, I tried building an image classifier based on the steps outlined in the lesson 1 notebook.
I wrote my first Medium post based on the results I got. Please check it out and let me know your thoughts.
I was doing the dog breeds identification on AWS. I was getting NaN in the training and validation loss while training with differential learning rates.
I’ve attached an image.
I was using a learning rate of 0.2.
Could someone explain why this happens?
I have the same question. How is the model handling variable input sizes?
My notebook is similar to Jeremy’s dog breeds notebook shown in the video, except for the learning rate and the architecture:

        Mine      Jeremy
Arch:   resnet34  resnet50
Later, when I changed the learning rate to 1e-2, the NaNs in the training/validation loss disappeared.
Do NaNs appear when the learning rate is much too high, as with 0.2 or 0.1 in this case?
The lr_finder was showing that an lr of 0.1 would be a good choice…
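Not fastai-specific, but here is a toy illustration of how a too-large step size makes the loss blow up and eventually produce NaN. It is a minimal sketch, assuming plain gradient descent on f(w) = w², so every name here is illustrative rather than anything from the library:

```python
import math

def sgd_steps(lr, steps=2000, w=1.0):
    """Run gradient descent on f(w) = w**2; the gradient is 2*w."""
    for _ in range(steps):
        w -= lr * 2 * w  # with lr > 1, |w| doubles-plus each step
    return w

small = sgd_steps(0.01)  # shrinks steadily toward 0
big = sgd_steps(1.5)     # overflows to inf, then inf - inf yields nan
print(small, big)
```

The same mechanism applies to a real network: with too large a step, the weights overshoot, activations and losses overflow, and subtracting or dividing infinities turns them into NaN, which then propagates through every subsequent update.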
Since no one answered, I’ll add what I found in case it’s helpful to someone: the code is buried deep in transforms.py in the fast.ai library. According to the code, the cropping of the validation images depends on how the training images are cropped. If the training images are cropped in the “random” or “googlenet” way, the validation images are centre-cropped; otherwise, the validation images are cropped the same way as the training images.
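The branching described above can be sketched roughly like this. This is a hypothetical simplification for clarity only; the function name and string values are illustrative, not the actual fastai API:

```python
def val_crop_type(train_crop_type: str) -> str:
    """Pick the validation crop given the training crop setting.

    Illustrative sketch of the logic in fastai's transforms.py,
    not the real function or its real identifiers.
    """
    if train_crop_type in ("random", "googlenet"):
        # Randomized training crops get a deterministic center crop
        # at validation time, so eval results are reproducible.
        return "center"
    # Otherwise validation mirrors whatever crop training used.
    return train_crop_type
```

So to answer the earlier question: yes, with the default random training crops, validation ends up center-cropped, but that is a consequence of the training setting rather than an independent choice.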
Can anyone tell me where to find all the model weights Jeremy trained in the lectures? I don’t have sufficient computational resources to train them myself, which is demotivating since I can’t reproduce the results of the models we’re building on my PC.