Lesson 1 In-Class Discussion ✅

Overfitting would mean that your validation loss gets worse as you continue training. It’s in the nature of fit_one_cycle that the loss gets worse at first, but as long as it’s going down in the end you’re fine.

One thing I noticed: when you unfreeze your model, you have to call lr_find after you call unfreeze(). In the notebook you call it before, which really doesn’t give you the information you need for choosing the learning rate.
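To spell out the order, here is a minimal sketch (fastai v1 API; `learn` is assumed to be a learner whose head has already been trained, e.g. with the stage-1 weights loaded — names and hyperparameters are illustrative, not prescriptive):

```python
# Sketch only: the point is that lr_find runs *after* unfreeze,
# so the learning-rate search covers the newly trainable layers too.
def finetune_after_unfreeze(learn):
    learn.unfreeze()        # make all layers trainable
    learn.lr_find()         # only now search for a learning rate
    learn.recorder.plot()   # inspect the loss-vs-LR curve
    learn.fit_one_cycle(2, max_lr=slice(1e-6, 1e-4))
```

If you call lr_find() before unfreeze(), the search only exercises the head’s parameters, so the plot tells you nothing about a good rate for the earlier layers.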

someone please help:

ImportError                               Traceback (most recent call last)
<ipython-input> in <module>
----> 1 from fastai.vision import *
      2 from fastai.metrics import error_rate

ImportError: No module named 'fastai'

I am running GCP … I did everything as mentioned in the tutorial post about using GCP. It’s such a shame that I am stuck before I can even start.
[screenshot]

I feel awful… I already feel like giving up…

Running the `conda list` command on my remote GCP machine shows that fastai is installed.
[screenshot]
Basically everything is installed, yet the notebook kernel (Python 3) cannot even find the fastai module. What do I do?
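A common cause of this symptom (an assumption here, since I can’t see your setup): the Jupyter kernel is running a different Python interpreter than the conda environment where fastai was installed. You can check from inside a notebook cell:

```python
import sys
import importlib.util

# Which interpreter is this kernel actually running?
print(sys.executable)

# Can *this* interpreter see fastai?
print("fastai importable:", importlib.util.find_spec("fastai") is not None)
```

If `sys.executable` doesn’t point into your conda environment, registering the environment as a kernel usually fixes it: run `python -m ipykernel install --user --name fastai` inside the activated environment, then pick that kernel in Jupyter.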


I am not willing to give up, but this is very DEMOTIVATING.

@gluttony47 I have also just started trying to go through the tutorials and I am trying to use GCP as well. I have hit the same issue as you. I also have fastai in my conda list. I was wondering if you had heard anything, and I wanted the issue to be seen again as well.

@gluttony47 I found a solution using a combination of threads.

I used this thread to create a conda environment: Fastai v0.7 install issues thread

I then used this thread to actually run the notebooks. You have to activate the environment, then start a Jupyter notebook server, then open another SSH connection in a second shell and connect through that: How to setup fastai v0.7 on gcp instance that is setup for 2019's Part 1
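For anyone following along, the steps above roughly look like this as a command sketch (environment name, port, user, and IP are all placeholders — adjust to your own setup):

```
# On the GCP instance:
conda activate fastai                      # the env created from the install thread
jupyter notebook --no-browser --port=8888  # start the notebook server

# In a second terminal on your local machine, forward the port over SSH:
ssh -L 8888:localhost:8888 <user>@<instance-external-ip>
# then open http://localhost:8888 in a local browser
```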

Based on what is taught in Lesson 1, I trained a resnet50 model to classify pictures of Romanesque cathedrals vs. Gothic cathedrals. I achieved an error rate of 5.1%. My notebook, along with the text files containing the URLs of the images I used for training and validation, can be found here: https://github.com/g-vk/fastai-course-v3

Here is a preview of the notebook:


Notes on Lesson 1 with some of my additions and clarifications. I hope some will find them helpful. Feel free to leave any comments.


Great job @gvolovskiy! I have a doubt: following the lesson notebook, I understood that the learning rate must be found before the unfreeze step, something like this:
learn.load('stage-1')
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
Please let me know!

Before unfreezing, the set of trainable parameters is the same as when training the head of the model. Since after loading the stage-1 weights the weights of the head of the model are already trained, there is no need to train them again, and hence no need to look for a good learning rate. It is only after we enlarge the set of trainable parameters by invoking learn.unfreeze() that looking for a suitable learning rate becomes necessary.
I hope my explanation was helpful for you.


Thanks for the explanation!

Edit:
I was not aware that heatmaps are enabled by default in the plot_top_losses function.

Hi all

I’m having an issue when I try to show the top losses: the images come out distorted.
What could the issue be here?

Thank you for sharing - it really helped us! 🙂


Glad to hear. Thanks

What do you mean by distorted? Your screenshot looks fine to me. Did you try a bigger figure size? figsize=(12,12)

For what it’s worth, you might consider switching from GCP to Salamander simply because Salamander appears to be operated by the same people who operate FastAI. Either that or it’s very closely integrated; point being that setting up FastAI with Salamander is quite easy.

Are you referring to the colored blobs that appear on each image? Those are called heatmaps and they are supposed to show you which portion of the image the neural net is “most interested” in, so to speak.

From the fast.ai documentation on plot_top_losses():

plot_top_losses(k, largest=True, figsize=(12, 12), heatmap:bool=None, heatmap_thresh:int=16, return_fig:bool=None) → Optional[Figure]

When heatmap is True (by default it’s True), Grad-CAM heatmaps (http://openaccess.thecvf.com/content_ICCV_2017/papers/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.pdf) are overlaid on each image.


Hope that helps you understand the heatmap “distortions”.
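If the overlay gets in the way, you can also turn it off explicitly. A small sketch (fastai v1 API; `interp` is assumed to be a ClassificationInterpretation you already built from your learner):

```python
# Compare the top-loss plots with and without the Grad-CAM overlay.
def show_top_losses(interp):
    interp.plot_top_losses(9, figsize=(12, 12), heatmap=True)   # with overlay
    interp.plot_top_losses(9, figsize=(12, 12), heatmap=False)  # plain images
```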

Got it, thanks a lot for the clarification

I guess I mistook the heatmaps for something wrong with the pictures. Thanks to @knakamura13 for pointing that out to me.


I just finished watching Lesson 1 and I think I’ve understood the concepts well. I also got the same thing working in my notebook. My question is: if I want to test one specific image of my own and see what the classifier outputs, how do I go about it?
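In fastai v1 the usual pattern is to load the image with open_image and pass it to the learner’s predict method. A hedged sketch (`learn` is assumed to be your trained learner; the path argument is a placeholder):

```python
# Sketch: classify a single image with a trained fastai v1 learner.
def classify_one(learn, img_path):
    from fastai.vision import open_image   # fastai v1 import
    img = open_image(img_path)             # loads and preprocesses the file
    pred_class, pred_idx, probs = learn.predict(img)
    return pred_class, probs[pred_idx]     # predicted label and its probability
```

predict applies the same transforms and normalization the learner was trained with, so you don’t need to resize the image yourself.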