I've been looking at Lesson 7, in particular at generating the output of the convolution layers, and I'm confused by the fact that you have several different weight files.
If I look at vgg16.py, when the model is built we load the weights 'vgg16.h5', while in vgg16bn.py we load 'vgg16_bn_conv.h5' if there is no top and 'vgg16_bn.h5' with the top. I can understand that 'vgg16_bn.h5' should differ from 'vgg16.h5' in the dense layers due to the addition of batch normalization, but since the convolution layers don't change, I would expect 'vgg16_bn_conv.h5' to be identical to 'vgg16.h5' without the top.
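For what it's worth, here is the kind of sanity check I had in mind: compare the conv-stage weight arrays stored in the two files and see if they really match. This is just my own toy sketch, not anything from the course code; I use small made-up `.npz` files standing in for the real `.h5` weights (with the actual 'vgg16.h5' and 'vgg16_bn_conv.h5' files one would open them with h5py instead):

```python
import numpy as np

rng = np.random.default_rng(0)
conv_w = rng.standard_normal((3, 3, 64)).astype("float32")

# Two stand-in weight files: identical "conv" weights, different "dense" weights,
# mimicking what I'd expect of vgg16.h5 vs vgg16_bn_conv.h5.
np.savez("weights_a.npz", conv1=conv_w, dense=np.zeros((10, 10), "float32"))
np.savez("weights_b.npz", conv1=conv_w, dense=np.ones((10, 10), "float32"))

def same_conv_weights(path_a, path_b, prefix="conv"):
    """True if every array whose name starts with the conv prefix matches."""
    with np.load(path_a) as a, np.load(path_b) as b:
        names = [k for k in a.files if k.startswith(prefix)]
        return bool(names) and all(np.allclose(a[k], b[k]) for k in names)

print(same_conv_weights("weights_a.npz", "weights_b.npz"))  # True
```

If the equivalent check on the real files came back True, that would confirm the two conv sections are interchangeable.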
Also in Lesson 7, to get the output of the convolution layers, you use the VGG16BN model, but before splitting off the convolution layers the model is fine-tuned for 3 epochs and the weights are saved. These weights are then reloaded just before the split. Given that during fine-tuning all of the layers are frozen except for the decision layer, reloading would seem to have no effect on the convolution layers. Can you explain why you reloaded the weights?
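To spell out my reasoning, here is a toy numpy sketch (my own, not the actual fast.ai/Keras code): if only the final layer is trainable, gradient steps leave the frozen conv weights bit-for-bit unchanged, so saving and reloading them should be a no-op for the conv part.

```python
import numpy as np

rng = np.random.default_rng(2)
conv_w = rng.standard_normal((4, 4))       # frozen "conv" weights
head_w = rng.standard_normal((4, 2))       # trainable decision layer
saved_conv = conv_w.copy()                 # weights "saved" before fine-tuning

x = rng.standard_normal(4)
target = np.zeros(2)
for _ in range(3):                         # 3 "epochs" of last-layer-only SGD
    feats = np.maximum(conv_w @ x, 0)      # conv stage never updated
    err = head_w.T @ feats - target
    head_w -= 0.01 * np.outer(feats, err)  # only the head is updated

print(np.array_equal(conv_w, saved_conv))  # True: reloading changes nothing
```

So unless something in the real training loop touches the frozen layers, the reload looks redundant to me.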
Basically, my question is: if you are only interested in the output of the convolution layers, whether you use the original VGG16 model or the VGG16BN model shouldn't matter, should it? Aren't they identical up until the fully connected layers?
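Put another way, here is a minimal numpy sketch of what I mean (again my own toy construction, not VGG itself): two "models" that share the same conv-stage weights but have different heads produce identical conv-stage features, even though their final outputs differ.

```python
import numpy as np

def conv_features(conv_w, x):
    """The shared 'convolutional' stage: a linear map plus ReLU."""
    return np.maximum(conv_w @ x, 0)

x = np.arange(1.0, 5.0)            # toy input, shape (4,)
conv_shared = np.eye(4)            # identical conv weights in both models
head_a = np.ones((4, 2))           # plain head
head_b = 2 * np.ones((4, 2))       # different head (e.g. with BN retrained)

feats_a = conv_features(conv_shared, x)
feats_b = conv_features(conv_shared, x)
out_a = head_a.T @ feats_a
out_b = head_b.T @ feats_b

print(np.allclose(feats_a, feats_b))  # True: conv-stage outputs are identical
print(np.allclose(out_a, out_b))      # False: only the heads disagree
```

If the real VGG16 and VGG16BN conv weights are the same, I'd expect the same thing to hold for the conv features there.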