This is my first post. I just finished Lesson 1, and at one point Jeremy mentions that some students downloaded 2x10 images of other things and made predictions on them.
I created a folder /vehicles next to /dogcats, and inside it I have /vehicles/train/cars/[list], /vehicles/train/bikes/[list], /vehicles/valid/cars/[list], and /vehicles/valid/bikes/[list].
The train folders have 10 images of cars and 10 images of bikes, and in the valid folders I have 4 images of cars and 4 images of bikes.
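For anyone wanting to replicate the layout, here is a minimal sketch that creates the same folder structure (the `data/vehicles` base path is my assumption; adjust it to wherever your data lives):

```python
from pathlib import Path

# Hypothetical base path -- change to match your own setup.
base = Path("data/vehicles")

# fastai's from_paths-style loaders expect one subfolder per class
# under train/ and valid/.
for split in ("train", "valid"):
    for cls in ("cars", "bikes"):
        (base / split / cls).mkdir(parents=True, exist_ok=True)

# Show the resulting class folders.
print(sorted(p.relative_to(base).as_posix() for p in base.glob("*/*")))
```

You would then drop your downloaded images into each class folder.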
My predictions are mixed up and the accuracy is only about 10%.
I am familiar with the concept of cross-validation sets, but I cannot seem to apply it in this particular case.
How can I retrain the model to recognize cars/bikes?
Based on what you stated, the only thought I have is: have you tried preprocessing the images using tfms (short for transformations)? tfms_from_model takes care of resizing, image cropping, initial normalization (producing data with a (mean, stdev) of (0, 1)), and more.
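To make the normalization idea concrete, here is a tiny plain-Python sketch of the (mean, stdev) → (0, 1) step (this is an illustration of the concept only, not fastai's actual implementation, which uses the pretrained model's channel statistics):

```python
from statistics import mean, pstdev

# Toy stand-in for one channel of pixel intensities (0-255).
pixels = [12.0, 48.0, 200.0, 90.0, 255.0, 33.0]

# Shift to zero mean and scale to unit standard deviation.
mu, sigma = mean(pixels), pstdev(pixels)
normalized = [(p - mu) / sigma for p in pixels]

print(mean(normalized))    # close to 0.0
print(pstdev(normalized))  # close to 1.0
```

After this step, every input lands on a comparable scale, which is what the pretrained network expects.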
@BSCowboy I just tried that. Thank you for chiming in. That was helpful.
I was actually wrong before. My model is overfitting and I need to find out why.
epoch  trn_loss  val_loss  accuracy
0      0.019177  0.013557  1.0
1      0.037125  0.013393  1.0
2      0.146108  0.074173  1.0
3      0.117849  0.084470  1.0
4      0.103263  0.076232  1.0
5      0.087171  0.081884  1.0
6      0.087570  0.109700  1.0
7      0.080142  0.099681  1.0
8      0.075022  0.089879  1.0
9      0.066707  0.080698  1.0
It is actually reaching 100% accuracy, and both losses are going down.
And that was my initial doubt: that having only 10 images in the VALID folder isn't enough. (I don't have test data like 10,000 cars to train on.) Do I need that in this case?
Again, I am confused by Jeremy's remark that we can download a few images off Google, place them in the folders, and the model will classify them correctly. (In his example, one student did that with 10 images of USD currency and 10 images of CAD currency, and it classified them correctly; he did not mention uploading thousands of currency images for training.)
Thank you once again for your previous answer. I appreciate the time you took to write your comments.
Hmmm, I am not sure. Have you watched the second video? It is still part of Lesson 1. Maybe you should play with the image size, or reduce the number of images in the training set to six or seven.
It’s really about playing around and developing hypotheses about what could be happening. I think the most important thing is to make one change, see what happens, then revert it and try something new. You can’t really do any harm, so why not systematically change everything and discover what each change does.
Thanks man. I really appreciate your comments!
Have you figured it out? If not, when you do I am curious to hear what you find.
I ended up downloading 2x30 images of cars/bikes and placing them in the aforementioned folders, with another 2x10 for validation. It actually predicted well.
Obviously we can always use more training images to prevent overfitting, but it did really well on the validation set after only 30 images per class.