Lesson 1 In-Class Discussion ✅

  1. Apart from the accuracy of the model, I don’t think you will have any issues with using the resnet34 model.
    I did a cursory search for competition notebooks that used fastai.
    Here is a Kaggle notebook using resnet50,
    and here is one that uses resnet18.

  2. Assuming you remove the class ‘new_whale’, you will have all real samples of new whales spread out and misidentified as one of the other 3k/4k+ whales. I can’t immediately tell how the accuracy of your model will change based on this decision, but I suppose you can experiment and make a decision.
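To make the experiment above concrete: if your labels live in a DataFrame (as with the Kaggle whale competition CSV), dropping the catch-all class before training is a one-line filter. The column names below are hypothetical, just for illustration:

```python
import pandas as pd

# Hypothetical labels file: one row per image, 'Id' is the whale label
df = pd.DataFrame({
    "Image": ["a.jpg", "b.jpg", "c.jpg", "d.jpg"],
    "Id": ["w_123", "new_whale", "w_456", "new_whale"],
})

# Keep only rows with a concrete whale identity
known = df[df["Id"] != "new_whale"].reset_index(drop=True)
print(len(known))  # 2 rows survive the filter
```

Training on `known` instead of `df` gives you the "new_whale removed" variant to compare against the baseline.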

I believe train_loss is the value of the loss function on the training data set, and valid_loss is the value of the loss function on the validation data set. Not exactly sure about error_rate beyond the obvious: the rate at which the model is wrong, i.e. number of wrong predictions / number of data points (number of images). I’m a newbie, so someone else please chime in!
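That reading of error_rate matches the usual definition (it is just 1 minus accuracy). A minimal sketch of the idea in plain numpy, independent of fastai's actual implementation:

```python
import numpy as np

def error_rate(preds, targets):
    """Fraction of wrong predictions: 1 - accuracy.

    preds:   (n_samples, n_classes) array of model scores.
    targets: (n_samples,) array of true class indices.
    """
    predicted = preds.argmax(axis=1)  # most likely class per image
    return float((predicted != targets).mean())

# 4 images, 3 classes: the model gets 3 of 4 right
preds = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.8,  0.1],
                  [0.2, 0.3,  0.5],
                  [0.6, 0.3,  0.1]])
targets = np.array([0, 1, 2, 2])
print(error_rate(preds, targets))  # 0.25
```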


Thanks Konstantin


Thank you aquietlife! You’re a quiet lifesaver :laughing:


I completed ‘lesson 1’ and worked on creating my own dataset, to predict road signs. It should’ve been simple, but for some reason I’m not able to minimize the error_rate. Can someone please help me with it?

Absolutely! Happy it helped :slight_smile:

Hi bhavik07, hope you are well!
You may find it difficult to improve your error rate because your signs are all very similar: they are all red, all on a pole, and differ mainly in their red or white text. I had a similar problem when I built a wristwatch classifier.

I was going to use a library called Tesseract, which recognizes text in pictures, and combine that with my classifier, but I am now working on other things.

Cheers mrfabulous1 :smiley::smiley:

Hi bhavik07
Here is a post that also talks about your issue.

Cheers mrfabulous1 :smiley::smiley:


Thanks for pointing me in the right direction.

This is helpful. Thanks!

Hi
I’m trying to build a model to classify my WhatsApp media folder into 4 categories:
1- Greeting Images
2- People Images
3- Animals
4- Other for anything else.
My question is how to deal with the Other category ? is it something that can be defined in the model training?

thanks in advance

Hi samir.s.omer, hope you’re having a marvelous day!

Below are some links discussing your issue!
It doesn’t seem an easy thing to resolve.

I also saw this model, which I haven’t played with yet, but maybe combining your model with something like this may help. (I thought this model was fantastic! You have to watch the video :+1:)

> muellerzr (Zachary Mueller), Dec '19: It was implemented in fastai v1 here https://github.com/fg91/Neural-Image-Caption-Generation-Tutorial

Hope this helps.
Cheers mrfabulous1 :smiley::smiley:


This is a super belated reply, but that isn’t entirely correct in the context of the lecture. The normalization being talked about there wasn’t a simple scaling of 0–255 to 0–1 by dividing by 255 (which, as you pointed out, results in no information loss), but rather normalizing each channel to zero mean and unit variance. If you actually performed this on each color channel, you would subtract the mean of each channel and divide by its standard deviation, but you do not then record those mean and stddev values, so you are losing information. As an extreme example, if you had a perfectly red image (each pixel is 255,0,0 in RGB space), zero-mean normalization would leave every channel with a value of 0. You would thus lose the relative information between the color channels, namely that the red channel was very strong.

I suspect that per-channel normalization was not intended. You pretty much always perform this normalization with statistics shared across the color channels, so that you still maintain the relative information between them.
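The perfectly-red-image example above is easy to demonstrate in numpy. This sketch only subtracts the mean (dividing by the standard deviation would be ill-defined here, since a constant channel has zero variance):

```python
import numpy as np

# A tiny "perfectly red" image: every pixel is (255, 0, 0) in RGB
img = np.zeros((4, 4, 3), dtype=np.float64)
img[..., 0] = 255.0

# Per-channel zero-mean normalization: every channel collapses to 0,
# so the fact that red dominated is lost entirely
per_channel = img - img.mean(axis=(0, 1))
print(per_channel.max())  # 0.0

# Normalizing with a single mean shared across all channels keeps the
# relative difference between red and the other channels
shared = img - img.mean()        # scalar mean over the whole image = 85
print(shared[..., 0].max())      # 170.0 -- red still stands out
```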

Hi All

Does anyone have any idea of why my recorder plot looks like this?

It appears that I essentially have two loss values for one learning rate! I’d appreciate any comments on this.

Thanks in advance.

This is the plot after lr_find()?
It looks like it increases the lr up to 1e-04 and then decreases it again down to 1e-05. Could you send a few lines before that one?

Hi samir.s.omer, hope you’re having lots of fun today!

I have been following A walk with fastai2 - Study Group and Online Lectures Megathread being run by muellerzr, and we covered a little about dealing with classes that your classifier hasn’t been trained on.

If you look at this notebook you will see a way of using multilabel classification to recognize images that your classifier has not been trained on.
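The core idea behind that multilabel trick is to score each class with an independent sigmoid and a threshold, instead of a softmax that is forced to pick something; if no class clears the threshold, the image falls into “other”. A minimal sketch with made-up numbers (not the notebook’s actual code):

```python
import numpy as np

def classify_with_other(logits, labels, threshold=0.5):
    """Multilabel-style classification with an implicit 'other' bucket.

    Each class gets an independent sigmoid probability; if none
    exceeds the threshold, the image is labelled 'other'.
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    best = int(probs.argmax())
    return labels[best] if probs[best] > threshold else "other"

labels = ["greeting", "people", "animals"]
print(classify_with_other([3.0, -2.0, -1.0], labels))   # 'greeting'
print(classify_with_other([-2.0, -1.5, -1.0], labels))  # 'other'
```

With a plain softmax the second example would still be forced into one of the three classes; the per-class sigmoid is what makes “none of the above” expressible.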

It looked good to me.

Cheers mrfabulous1 :smiley: :smiley:

Yes, it is the recorder plot of the resnet34. I was following the course exactly but tried some experiments with the resnet34. Here are some other screenshots.

The Oxford-IIIT Pet Dataset downloads very slowly. The server I am using is easyaiforum, which I suppose is in China.

The solution is here:
https://bbs.easyaiforum.cn/thread-1613-1-1.html