Lesson 1 In-Class Discussion ✅

Hi bhavik07, hope you are well!
You may find it difficult to improve your error rate because your signs are all very similar: they are all red, on a pole, and have red or white text. I had a similar problem when I built a wristwatch classifier.

I was going to use a library called Tesseract, which recognizes text in pictures, and combine it with my classifier, but I am now working on other things.

Cheers mrfabulous1 :smiley::smiley:

Hi bhavik07
Here is a post that also talks about your issue.

Cheers mrfabulous1 :smiley::smiley:


Thanks for pointing me in the right direction.

This is helpful. Thanks!

I’m trying to build a model to classify my WhatsApp media folder into 4 categories:
1- Greeting Images
2- People Images
3- Animals
4- Other for anything else.
My question is: how do I deal with the Other category? Is it something that can be defined during model training?

thanks in advance

Hi samir.s.omer, hope you're having a marvelous day!

Below are some links that discuss your issue!
It doesn’t seem an easy thing to resolve.

I also saw this model, which I haven’t played with yet, but maybe combining your model with something like this may help. (I thought this model was fantastic! You have to watch the video :+1:)
muellerzr (Zachary Mueller), Dec '19:

It was implemented in fastai v1 here: https://github.com/fg91/Neural-Image-Caption-Generation-Tutorial

Hope this helps.
Cheers mrfabulous1 :smiley::smiley:


This is a super belated reply, but that isn’t entirely correct in the context of the lecture. The normalization being discussed wasn’t a simple scaling of 0–255 to 0–1 by dividing by 255 (which, as you pointed out, loses no information), but rather normalizing each channel to zero mean and unit variance. If you actually performed this on each color channel, you would subtract each channel’s mean and divide by its standard deviation; but since you don’t record those mean and stddev values, you are losing information. As an extreme example, take a perfectly red image (every pixel is 255,0,0 in RGB space): performing the zero-mean normalization per channel would leave all channels at 0. You have then lost the relative information between the color channels, namely that the red channel was very strong.

I suspect that per-channel normalization was not intended. You pretty much always perform this normalization across all the color channels together, so that you still maintain relative information between the channels.
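To make the point above concrete, here is a small NumPy sketch (a hypothetical 4×4 all-red image, not from the lecture) contrasting per-channel statistics, which collapse everything to zero, with statistics shared across all channels, which preserve the "red was strong" signal:

```python
import numpy as np

# A "perfectly red" 4x4 image: every pixel is (255, 0, 0) in RGB.
img = np.zeros((4, 4, 3), dtype=np.float64)
img[..., 0] = 255.0

# Per-channel normalization: subtract each channel's own mean and
# divide by its own std (guarding against division by zero for
# perfectly flat channels).
means = img.mean(axis=(0, 1))                      # [255., 0., 0.]
stds = img.std(axis=(0, 1))
per_channel = (img - means) / np.where(stds == 0, 1, stds)
print(per_channel.max())                           # 0.0 -- every channel collapses

# Normalizing with statistics shared across all channels keeps the
# relative information between channels intact.
mean_all, std_all = img.mean(), img.std()
shared = (img - mean_all) / std_all
print(shared[..., 0].mean() > shared[..., 1].mean())  # True: red still dominates
```

In practice, libraries normalize per channel but with *dataset-wide* statistics (e.g. ImageNet means/stds), which are fixed constants and therefore don't destroy per-image channel information the way per-image per-channel statistics do.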

Hi All

Does anyone have any idea of why my recorder plot looks like this?

It appears that I essentially have two loss values for one learning rate…! I’d appreciate any comments on this.

Thanks in advance.

Is this the plot after lr_find()?
It looks like it increases the lr up to 1e-04 and then decreases it again down to 1e-05. Could you send a few lines of code before that one?

Hi samir.s.omer, hope you're having lots of fun today!

I have been following the A walk with fastai2 - Study Group and Online Lectures Megathread run by muellerzr, and we covered a little about dealing with classes that your classifier hasn’t been trained on.

If you look at this notebook you will see a way of using multilabel classification to recognize images that your classifier has not been trained on.
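The idea in that notebook can be sketched in a few lines: in a multilabel setup each class gets an independent sigmoid probability, so if nothing clears a threshold you can fall back to "other". The class names, logits, and threshold below are purely illustrative, not from the notebook:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical trained classes for the WhatsApp-folder example.
classes = ["greeting", "people", "animals"]

def predict_with_reject(logits, threshold=0.5):
    """Multilabel-style prediction: each class gets an independent
    sigmoid probability; if none clears the threshold, the image
    falls back to 'other' instead of being forced into a known class."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    above = [c for c, p in zip(classes, probs) if p >= threshold]
    return above if above else ["other"]

print(predict_with_reject([3.0, -2.0, -1.5]))   # ['greeting']
print(predict_with_reject([-2.0, -3.0, -1.0]))  # ['other']
```

This works because, unlike softmax, the sigmoid probabilities don't have to sum to 1, so the model is allowed to say "none of the above".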

It looked good to me.

Cheers mrfabulous1 :smiley: :smiley:

Yes, it is the recorder plot of the resnet34. I was following the course exactly but tried some experiments with the resnet34. Here are some other screenshots.

The Oxford-IIIT Pet Dataset downloads very slowly. The server I am using is easyaiforum, which I suppose is in China.

The solution is here:

My “throw wet toilet paper at the wall” first try at ML!

I briefly looked at the Fast.AI course last year, but real life got in the way; now I’m back, and I’m going to actually finish it this time.

So I got my Paperspace account running, ran through the first lesson notebook, watched the video, made my own notes etc. Challenged myself to make my own classifier with my own data.

The simplest and quickest thing I could think of was to use the ‘People’ feature in iPhotos to select pictures of my family. I just trusted that Apple’s deep learning was correct (it isn’t), exported the files, made a tar file, then used the ‘Upload’ button on my notebook. This took about 3 hours with a 330 MB file, so if anyone knows a better way…

I duplicated the lesson 1 notebook again, then deleted some of the notes.

The results are hopeless, as expected, although better than chance (a 0.66 error rate?), with a rate of about 0.44.

It all seemed pretty straightforward really. Fun, if not particularly useful.
Even this daft example seems able to tell my daughter apart from her parents. I won’t be submitting this to Kaggle anytime soon, but I found this exercise helped me understand what was going on much better than just watching the video and clicking Shift-Enter a few times :wink:


@jeremy sir, how do I save the model in .pt format? I want to build an Android app, but can’t do it with the .pth format. Any help, please?

Hi masudbarki Hope all is well and you are having a jolly day!

I suggest you search this forum using “android” and do some research, as many people have asked or researched this question before. It is likely you will have to create a .pkl file, convert it using ONNX, and use TensorFlow before using it on Android. This is just one approach.
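Another route, since the question specifically asks for a .pt file, is exporting the underlying PyTorch model to TorchScript, which PyTorch Mobile can load on Android. A minimal sketch, assuming you can get at your trained `nn.Module` (the tiny Sequential below is just a stand-in for your real network):

```python
import torch
from torch import nn

# Stand-in for whatever model you actually trained (e.g. a fastai
# learner's .model attribute); replace with your own network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 4),
)
model.eval()  # tracing should happen in inference mode

example = torch.rand(1, 3, 224, 224)     # dummy input for tracing
traced = torch.jit.trace(model, example)  # record ops into TorchScript
traced.save("model.pt")                   # ship this file in the app

# Sanity check: reload the .pt file and run the same input through it.
reloaded = torch.jit.load("model.pt")
print(reloaded(example).shape)            # torch.Size([1, 4])
```

On the Android side the saved file is loaded with PyTorch Mobile's `Module.load()`; the trace-based export above only captures the control flow taken for the example input, so models with data-dependent branches need `torch.jit.script` instead.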

Cheers mrfabulous1 :smiley: :smiley:


Ok sir

Hi masudbarki, I saw this link in my travels just now; it may have useful ideas!

Cheers mrfabulous! :smiley: :smiley:


I was going to develop a classifier for Android using Flutter. Thanks, sir. I converted it to .pt with this tutorial.


In “Lesson 1 - Image Recognition”, I found: “In part 1 of the course, we are always going to be making images square shapes. In part 2, we will learn how to use rectangles as well.”
Would you please tell me where I can find that? The images I have are all rectangles, 1100 × 800.
Thank you!