Dog breed identification

Hi, I started the fast.ai classes 2 weeks ago and it's very entertaining.

I tried to enter the dog breed identification challenge (https://www.kaggle.com/c/dog-breed-identification) using the VGG model as in lesson 1. The dataset contains a total of 10222 pictures divided into 120 categories. I used 10% of the data for the validation set. To be sure that there is at least one picture of each breed in the validation set, I explicitly took 10% of each category.
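For reference, this is roughly how I built the validation set (just a sketch; the paths and the seed are placeholders):

```python
import os, random, shutil

# Move 10% of each breed's images from train/ to valid/, so that
# every one of the 120 breeds appears in the validation set.
random.seed(42)  # arbitrary seed
train_dir, valid_dir = 'data/train', 'data/valid'  # placeholder paths

for breed in os.listdir(train_dir):
    images = os.listdir(os.path.join(train_dir, breed))
    n_valid = max(1, int(0.1 * len(images)))  # keep at least one image per breed
    os.makedirs(os.path.join(valid_dir, breed), exist_ok=True)
    for fname in random.sample(images, n_valid):
        shutil.move(os.path.join(train_dir, breed, fname),
                    os.path.join(valid_dir, breed, fname))
```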

The thing is that my model doesn't seem able to fit this data well.
After the first epoch the results are the following: loss: 12.2193 - acc: 0.2138 - val_loss: 11.9785 - val_acc: 0.2401
After 20 epochs the results are not much better: loss: 11.0408 - acc: 0.3108 - val_loss: 11.5460 - val_acc: 0.2807
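The training itself is just the lesson 1 recipe (roughly; the batch size here is a placeholder):

```python
from vgg16 import Vgg16  # the wrapper class from the lesson 1 notebook

batch_size = 64  # placeholder value

vgg = Vgg16()
batches = vgg.get_batches('data/train', batch_size=batch_size)
val_batches = vgg.get_batches('data/valid', batch_size=batch_size * 2)

# Replace the 1000-way ImageNet output layer with a 120-way one, then fit
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=20)
```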

I randomly checked some folders of the train and validation sets and the data seems correctly classified.
These pictures come from ImageNet, so I am curious what could make my model fail, since it is based on VGG16.

The average number of pictures per category is less than 100 (10222 / 120 ≈ 85); is it possible that there is simply not enough data to train the model?

Has anyone tried to enter this competition using the method from lesson 1 and gotten such bad results?

Thanks in advance.

I tried to train the model on fewer random classes to see what happens, and here is what I got (I trained the model for 5 epochs each time; a sketch of how I picked the subsets is below the numbers):

5 breeds: loss: 0.0835 - acc: 0.9948 - val_loss: 6.0444e-04 - val_acc: 1.0000

10 breeds: loss: 0.5175 - acc: 0.9492 - val_loss: 1.0107 - val_acc: 0.8889

25 breeds: loss: 4.3907 - acc: 0.7077 - val_loss: 4.3117 - val_acc: 0.6944

50 breeds: loss: 5.9251 - acc: 0.6154 - val_loss: 6.6222 - val_acc: 0.5763
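The subsets were made along these lines (a sketch; the paths are placeholders, and each subset was then run through the same lesson 1 pipeline):

```python
import os, random, shutil

# Copy a random selection of breed folders into a fresh working directory.
def make_subset(n_breeds, src='data/train', dst='data/train_subset'):
    breeds = random.sample(os.listdir(src), n_breeds)
    for breed in breeds:
        shutil.copytree(os.path.join(src, breed), os.path.join(dst, breed))
    return breeds

make_subset(25)  # e.g. the 25-breed run
```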

I guess it makes sense that the more classes there are, the harder it is for the model to learn efficiently. But this VGG model won the ImageNet challenge, in which there are one thousand classes. I tried increasing the number of epochs, but it doesn't seem to help much.

Hi,

I also tried the same and got to 75% accuracy and 0.8 loss with a lot of dropout. I think the problem is indeed the amount of data, although data augmentation didn't get me far either.
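The rough shape of what I did is below (a sketch, not the exact kernel code; the dropout rate and the size of the dense layer are just placeholder values):

```python
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense, Dropout

# Freeze the convolutional base and train only a small, dropout-heavy head.
base = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

x = Flatten()(base.output)
x = Dropout(0.5)(x)                        # placeholder dropout rate
x = Dense(256, activation='relu')(x)       # placeholder head size
x = Dropout(0.5)(x)
out = Dense(120, activation='softmax')(x)  # 120 breeds

model = Model(base.input, out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```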

See kernel: https://www.kaggle.com/hanspinckaers/fine-tuning-vgg16-with-drop-out-loss-0-8

Doesn’t run on Kaggle yet, sorry, will fix that later.

I think the top of the leaderboard mainly uses/fine-tunes bigger networks (Inception, etc.) with ensembling.

Thanks for your answer!
I tried to enter the competition after the first lesson, so I didn't have any knowledge of regularization and the other techniques introduced in lesson 3. I will try to improve my model with these.
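For instance, I plan to try data augmentation along these lines (a sketch with placeholder settings):

```python
from keras.preprocessing.image import ImageDataGenerator

# Random flips/shifts/zooms to get more out of the ~85 images per breed.
# All the ranges below are placeholder values to tune.
gen = ImageDataGenerator(rotation_range=10,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         zoom_range=0.1,
                         horizontal_flip=True)

batches = gen.flow_from_directory('data/train', target_size=(224, 224),
                                  batch_size=64, class_mode='categorical')
```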
In lesson 7, another kind of CNN is introduced (ResNet); maybe that would give even better results. I will try it when I reach that point.