BN loss scores increasing during training in L3


I’m not able to get the same loss scores on Cats-Dogs-Redux lesson3.ipynb as Jeremy got in his version. Here’s my version of the notebook, which is identical except for the output and path change. I used lesson2.ipynb from class to produce finetune3.h5.

My lesson 3 results begin to diverge from the class notebook once training starts: Jeremy’s training loss goes from .18 down to .08, whereas mine goes from .30 up to .37.

Has anyone else experienced this? I thought I saw another comment here mentioning higher loss with BN, but I can’t find it now.

I’ve gone back to running the class notebooks because I wasn’t getting good results from my own customized code that uses BN and data augmentation. Using the “7 lines of code” and clipping, I achieved a .08 loss on the test set, but with batch normalization and data augmentation I get ~3.0.

EDIT: Never mind about the leaderboard scores… I had flipped the min and max values during clipping, which significantly threw off my predictions and log loss scores.
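For anyone who hits the same bug: `np.clip` takes the minimum first, then the maximum, and if you swap them NumPy silently collapses every value to the second argument instead of raising an error. A minimal sketch of the failure mode (the 0.05/0.95 thresholds here are just illustrative, not necessarily the values from the lesson):

```python
import numpy as np

preds = np.array([0.01, 0.30, 0.70, 0.99])  # raw model probabilities

# Correct order: clip(a, a_min, a_max) only pulls extremes into the interval.
good = np.clip(preds, 0.05, 0.95)   # [0.05, 0.30, 0.70, 0.95]

# Flipped arguments: NumPy computes minimum(maximum(a, 0.95), 0.05),
# so EVERY prediction becomes 0.05 -- a confident "cat" for every image.
bad = np.clip(preds, 0.95, 0.05)    # [0.05, 0.05, 0.05, 0.05]

# Log loss on an actual dog (label = 1) under each version:
print(-np.log(good[-1]))  # ~0.05, a reasonable score
print(-np.log(bad[-1]))   # ~3.0, enough to wreck the leaderboard score
```

A flipped clip like this turns every confident dog prediction into a per-image loss near 3.0, which lines up with the kind of blown-up scores described above.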

I still haven’t figured out why my BN loss scores didn’t follow Jeremy’s during training, but I’ve decided to chalk it up to the notebooks not being an exact representation of the code someone else actually ran.

Also, I downloaded vgg16_bn.h5, but not vgg16_bn_conv.h5 since that doesn’t appear to be used.

edit: One small difference is that I used 4,000 samples for validation instead of 2,000, but that doesn’t explain why my batch norm + data augmentation results would be worse than my vanilla finetuned VGG network.