In the Statefarm notebook, the convolutional features are precomputed using pretrained VGG weights and saved to disk for convenience. These conv features are then fed into a model of dense layers for classification.
Would this be equivalent to combining the conv layers and the dense layers into a single model, and freezing the conv layers by setting
trainable = False ?
My motivation for this is that I run out of memory when using a large amount of data augmentation and concatenating the resulting conv features.
Would combining the conv layers and dense layers into a single model, and feeding in the training data in batches, help with the memory problem?
Or would training then recompute the conv features at every epoch?