I have two ideas for adding dropout to Keras’ VGG16 model.
My first idea is to add dropout between the convolutional layers while leaving the pretrained weights in place. I believe this would feed more varied activations to the fully connected layers and reduce overfitting (please correct me if I'm wrong about this).
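For reference, a minimal sketch of this first idea, assuming `tensorflow.keras`. The placement (dropout after each pooling layer) and the 0.25 rate are my own illustrative choices, not something from VGG16 itself; reusing the original layer objects keeps their pretrained weights intact:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dropout, MaxPooling2D
from tensorflow.keras.models import Model

# weights="imagenet" in practice; None here only to avoid the download
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

x = base.input
for layer in base.layers[1:]:          # skip the InputLayer
    x = layer(x)                       # reuse pretrained layer objects, weights intact
    if isinstance(layer, MaxPooling2D):
        x = Dropout(0.25)(x)           # rate is an illustrative assumption

model = Model(base.input, x)
```

Note that dropout layers themselves have no weights, so interleaving them this way does not disturb the pretrained convolutional filters; it only perturbs the activations during training.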
My second idea is to add dropout before the two Dense 4096 layers. This would be easier and might give similar results to the first approach. Mostly I am looking for confirmation that both approaches are valid while keeping the pretrained weights in place and training with dropout. Would one of these be considered better than the other? Is there a better way?
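To make the second idea concrete, here is a sketch assuming `tensorflow.keras`: the convolutional base is frozen, and a new classifier head with dropout before each Dense 4096 layer is attached. The 0.5 rate and the 10-class output are illustrative assumptions:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.models import Model

# weights="imagenet" in practice; None here only to avoid the download
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False                 # keep the pretrained conv weights fixed

x = Flatten()(base.output)
x = Dropout(0.5)(x)                    # dropout before the first Dense 4096
x = Dense(4096, activation="relu")(x)
x = Dropout(0.5)(x)                    # dropout before the second Dense 4096
x = Dense(4096, activation="relu")(x)
out = Dense(10, activation="softmax")(x)  # number of classes is an assumption

model = Model(base.input, out)
```

One caveat: with `include_top=False` the two Dense 4096 layers are built from scratch here, so only the convolutional weights are pretrained; if you want to keep the pretrained dense weights too, you would need `include_top=True` and a more surgical rebuild.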