I’m around lesson 5 now and following Jeremy’s advice to try to reach the top 50% in the State Farm competition.
I used the VGG16 model with batch normalization and trained it to a validation accuracy of roughly 0.96 (I don’t remember the exact number).
I thought this was pretty good, so I submitted the results to Kaggle, but I got a score of 2.0 even after applying clipping. That puts me in the bottom 10% of the leaderboard, so a pretty bad position.
I’ve been checking everything this morning but can’t see what’s wrong. Could it be that I’m getting good accuracy on the training/validation set, but bad accuracy on the test set?
How would you test for this kind of thing?
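One sanity check I’ve been thinking about: the leaderboard scores multiclass log loss, not accuracy, so a model that is confidently wrong on a few images can score 0.96 accuracy but still have a terrible log loss. Here is a minimal sketch (my own helper, not from the course notebooks) of computing the Kaggle-style clipped log loss on validation predictions, so it can be compared directly against the leaderboard score:

```python
import numpy as np

def multiclass_log_loss(y_true, probs, eps=1e-15):
    """Kaggle-style multiclass log loss with probability clipping.

    y_true: integer class labels, shape (n,)
    probs:  predicted class probabilities, shape (n, n_classes)
    """
    probs = np.clip(probs, eps, 1 - eps)
    # Renormalize rows so clipped probabilities still sum to 1
    probs = probs / probs.sum(axis=1, keepdims=True)
    n = len(y_true)
    # Mean negative log-probability assigned to the true class
    return -np.log(probs[np.arange(n), y_true]).mean()

# Toy example: one confident-correct and one confident-wrong prediction.
# Accuracy is 0.5, but the wrong confident prediction dominates the loss.
y_true = np.array([0, 1])
probs = np.array([[0.99, 0.01],
                  [0.99, 0.01]])
print(multiclass_log_loss(y_true, probs))
```

If the validation log loss computed this way is much lower than 2.0, that points to a mismatch between the validation and test distributions rather than the metric.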
PS: I split the training data into 4,500 validation images and around 16,000 training images.