In the lesson 1 notebook, Jeremy does not convert the incoming images to BGR, nor does he subtract the mean from the images when he uses Keras’ pre-built VGG. Later, however, when we reconstruct it, we include the lambda layer to convert the images and subtract the mean from the image data.
The Keras code for VGG does neither of these steps, yet the VGG example in lesson 1 works swimmingly. Does anyone have any idea why? Am I missing something here?
OK, so the Vgg16 function we’re using in the notebooks isn’t the Keras one - it’s one Jeremy made that adds Dropout back into VGG and takes care of the preprocessing.
So on the built in Keras, we’d have to pre-process the images prior to passing into the net?
from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Model

vgg16 = VGG16()
vgg16.layers.pop()  # drop the 1000-way ImageNet softmax
vgg16.layers.pop()  # drop the second 4096-unit FC layer
for layer in vgg16.layers: layer.trainable = False  # freeze the pretrained layers
m = Dense(4096, activation='relu')(vgg16.layers[-1].output)
m = Dense(25, activation='softmax')(m)  # new head for 25 classes
vgg16 = Model(vgg16.input, m)
vgg16.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
It’s unclear in your example - is that the Keras VGG or Jeremy’s VGG? It seems to me that for the Keras pre-built VGG the answer is you MUST pre-process your images (RGB->BGR + mean subtraction) before using it. I just can’t find anything documenting it, but there’s nothing in that model’s codebase to suggest it’s done for you.
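For reference, here is a minimal numpy sketch of that preprocessing step - the same thing the Lambda layer in the course notebooks wraps. It assumes channels-last input in RGB order; the per-channel means are the standard ImageNet values used with the original Caffe VGG weights.

```python
import numpy as np

# ImageNet per-channel means in RGB order (standard values for VGG)
VGG_MEAN = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def vgg_preprocess(x):
    """Zero-center each channel and reverse RGB -> BGR.

    x: array of shape (..., 3), channels last, RGB order.
    """
    x = x - VGG_MEAN      # subtract the ImageNet mean per channel
    return x[..., ::-1]   # RGB -> BGR, as the Caffe-trained weights expect
```

You can pass a function like this to a Keras Lambda layer, or apply it to batches before calling the model.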
IIRC this switches channels by default. BTW do you know if this can be used with a generator? I’m thinking we should still be good to use it with the Lambda layer
Great tip, thanks for the information. One cool thing I just realized I could do is use ?? on my variables, which helped me see the different commands I have available. Thanks again for your help on this.