By memory, do you mean RAM or disk? In the code I shared we only use ~1% of the 50 million examples available; I don’t recall the exact number now, but at resolution 128x128 that is only a couple of GB.
These files never all get loaded into RAM. They are written one by one to disk, and during training they are loaded and transformed on a per-batch basis.
v1 exposes many nice ways to work with image files on disk - those are the class methods of ImageDataBunch in vision/data.py starting around line 271.
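For example (fastai v1; the paths and filename regex below are just placeholders):

```python
from fastai.vision import *

path = Path('data/quickdraw')  # placeholder path

# labels come from the folder names: path/train/<class>/..., path/valid/<class>/...
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), size=128, bs=64)

# or labels from a csv with (filename, label) columns
# data = ImageDataBunch.from_csv(path, csv_labels='labels.csv', ds_tfms=get_transforms(), size=128)

# or labels extracted from the filename with a regex
# data = ImageDataBunch.from_name_re(path, fnames, r'/([^/]+)_\d+\.png$', ds_tfms=get_transforms(), size=128)

data = data.normalize(imagenet_stats)
```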
In v2 of the course I remember Jeremy saying: ‘if you have an option to change the data into a format supported by the tool of your choice, go ahead and do it, it will save you a lot of hassle vs writing your own way of interfacing with the data directly’. There is a way to use all 50 million examples by generating them on the fly without storing anything to disk, but I heeded Jeremy’s advice and took the easy way out.
As a side note, I do wonder whether, even with a big, well-tuned model, it makes a difference to train on 50 million examples vs say 2 million with data augmentation. Realistically, there must be a lot of redundancy in 50_000_000 / 340 ≈ 147,000 drawings of a snowman!
I was able to get the accuracy up to 0.99+, and the single misclassified image is one where the insect is only partially shown:
ResNet34: 0.995037
ResNet50: 0.998677
These results also beat the older papers, as far as I could find comparable values.
I always have to manually clean up my WhatsApp downloaded-images folder, because memes and other images sit in the same folder as the camera pics shared by my contacts.
Hence I trained a ResNet-34 model on 2000 images without unfreezing: 1000 manually classified images from my own WhatsApp and another 1000 sourced from Google search.
I think Google Photos has a similar model built in to remove clutter, but it does not detect memes.
I am planning to build a bigger dataset and test set and see how it goes.
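For anyone curious, here is a rough sketch of that kind of setup in fastai v1 (the folder layout and class names are my assumptions, not the actual notebook):

```python
from fastai.vision import *

# assumed layout: data/whatsapp/<class>/*.jpg with classes like 'meme' and 'photo'
path = Path('data/whatsapp')
data = ImageDataBunch.from_folder(path, valid_pct=0.2, ds_tfms=get_transforms(),
                                  size=224, bs=32).normalize(imagenet_stats)

learn = create_cnn(data, models.resnet34, metrics=accuracy)  # cnn_learner in later fastai v1 releases
learn.fit_one_cycle(4)  # trains only the head, i.e. without unfreezing
```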
Playing with the Birds dataset from Caltech. It seems it’s going to be challenging: there are 200 bird classes, each with a dozen pictures. With simple 4-cycle training of ResNet-34, I got very bad results:
Be careful with the CUB dataset, as it says on their website: Warning: Images in this dataset overlap with images in ImageNet. Exercise caution when using networks pretrained with ImageNet (or any network pretrained with images from Flickr) as the test set of CUB may overlap with the training set of the original network.
Potentially you are using images for your valid/test set which were previously in the ImageNet train dataset used for training the weights of the pretrained ResNet.
Nevertheless, I was also playing around with this dataset before and got similar results to the one posted by you.
I guess in this case it would be best practice to remove the ImageNet images from the birds dataset, or to pretrain with other images (and not ImageNet)?
@MicPie I didn’t fully understand the warning. Looking at the training cycle results I see big losses. Isn’t this strange? The results should be almost perfect if the same pictures were used to train the model.
The thing is, if we train without an ImageNet-pretrained NN it will take a long time, but I will give it a try.
I have currently been working on the Happy Whale dataset. I used a resnet34 model to classify 3000+ whale classes. The classifier I built didn’t perform very well, with only 10% accuracy. From this dataset I understand that there might be more to it to increase its accuracy, like applying certain specific transformations, trying different models, etc. Since this is a multi-class prediction problem, I hope it will be taught later in this course how to build a multi-class prediction model in fastai, which is so much easier compared to building this notebook completely from scratch in PyTorch. Below is my gist for the Happy Whale dataset classification:
This is what my model predicts, separated by actual labels. These predictions seem pretty reasonable to me; even I would have predicted the same. Not sure what to expect from the model in terms of scope for improvement. Funny!
Does this mean that if we set size = 224 it will add extra zero padding or do reflection along the edges to match, when the original image size is 48 x 48?
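If it helps, in fastai v1 this is controlled by the `padding_mode` argument of the transform step (reflection is the default, if I remember correctly). A minimal sketch, with a placeholder path; method names can differ slightly between v1 releases:

```python
from fastai.vision import *

path = Path('data/images')  # placeholder

data = (ImageList.from_folder(path)
        .split_by_rand_pct(0.2)
        .label_from_folder()
        .transform(get_transforms(), size=224, padding_mode='zeros')  # or 'reflection' / 'border'
        .databunch(bs=64)
        .normalize(imagenet_stats))
```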
Looking at the confusion matrix I see an interesting pattern: it basically classifies everything into the first 8 classes (look at the wide dark blue squares).
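A quick way to check that pattern numerically rather than eyeballing the heatmap (fastai v1 interpretation API):

```python
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(12, 12), dpi=60)
interp.most_confused(min_val=2)  # (actual, predicted, count) pairs, worst first
```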
I’m trying the former and will share if I find out anything interesting although most likely I won’t.
I used your starter code, bumped the data up to 5%, and then decided to go ahead and train on the complete dataset.
Fair warning if anyone else wants to try this approach: it took me about 2 days to extract the images and it will take a lot longer to train on the complete data. But I did make the mistake of joining a competition that’s currently running (against @radek’s advice to join a fresh comp), so my money is on this idea.
That’s interesting. I did the following and ended up with the same pattern:
I created my own chicken dataset using google_images_download
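In case anyone wants to reproduce this, the package is used roughly like this (the keywords, limit and output directory below are just examples):

```python
from google_images_download import google_images_download

response = google_images_download.googleimagesdownload()
response.download({
    "keywords": "silkie chicken,leghorn hen,brahma rooster",  # example queries
    "limit": 100,                                             # images per keyword
    "format": "jpg",
    "output_directory": "data/chickens",
})
```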
I trained on the dataset without unfreezing; it is looking quite promising - only the difference between male and female of each chicken type is difficult (also due to a lot of noise in the dataset, I think).
Then I unfroze and trained all layers (with ‘learn.fit_one_cycle(2, max_lr=slice(1e-3,1e-1))’).
And ended up with the following pattern (as if one chicken type is quite generic):
With ‘max_lr=slice(1e-6,1e-3)’ instead, it does improve, so 1e-3 seems too high in this case. Can we state that too high a learning rate will create an overly generalised model? And that your learning rate is probably too high?
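For reference, the usual fastai v1 recipe for picking max_lr after unfreezing looks roughly like this:

```python
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()  # pick a value well before the loss starts to blow up
learn.fit_one_cycle(2, max_lr=slice(1e-6, 1e-3))  # earlier layers get the smaller end of the slice
```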
Hey @bachir - not sure about this but I’m looking through your notebook, and it seems to me like your “labels” list and your “fnames” list aren’t in the same order.
I’m saying that because the first 250 labels are ‘77’, while the first flower images from your fname list are definitely not the same flower. You might be training with random labels.
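A quick sanity check you could run (variable names taken from your notebook as I understand it; `fname_to_label` is hypothetical):

```python
import random

# spot-check that fnames and labels describe the same items
for i in random.sample(range(len(fnames)), 5):
    print(fnames[i], labels[i])  # open these files and confirm each label matches

# safer still: derive each label directly from its filename (or csv row) so the
# two lists can never drift out of order
# labels = [fname_to_label[f] for f in fnames]
```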