Cats and Dogs without re-training the last layer

In the Python notebook dogs_cats_redux, I didn't find a part where we pop the last layer of the network and retrain it. Since we are not doing that, shouldn't we be getting the probabilities/scores for all the classes that VGG was trained on?

As I understand it, if the subset of classes you want to predict changes (cats and dogs in this case), you ought to replace the final layer (that is, change the number of output neurons to two in our case) and retrain it. But we don't appear to do that in the dogs_cats_redux notebook.

Am I missing something important? Anything that I misunderstood?

The last layer is popped, but this happens inside the vgg.finetune call, which is defined in vgg16.py as:
def finetune(self, batches):
    model = self.model
    model.pop()
    for layer in model.layers: layer.trainable = False
    model.add(Dense(batches.nb_class, activation='softmax'))
    self.compile()
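The effect of finetune can be illustrated without Keras at all. Below is a minimal sketch of the pop / freeze / append pattern, using hypothetical Layer and Model stand-ins (these are for illustration only, not the actual Keras classes):

```python
class Layer:
    """Stand-in for a Keras layer (hypothetical, for illustration only)."""
    def __init__(self, name, units, trainable=True):
        self.name = name
        self.units = units
        self.trainable = trainable

class Model:
    """Stand-in for a Keras Sequential model with pop/add."""
    def __init__(self, layers):
        self.layers = list(layers)
    def pop(self):
        return self.layers.pop()
    def add(self, layer):
        self.layers.append(layer)

def finetune(model, nb_class):
    """Mirror vgg.finetune: drop the 1000-way ImageNet head, freeze
    everything that remains, and append a fresh head with nb_class outputs."""
    model.pop()
    for layer in model.layers:
        layer.trainable = False
    model.add(Layer("softmax", nb_class))

# VGG as shipped: a feature stack plus a 1000-way ImageNet classifier head.
vgg = Model([Layer("conv_block", 512), Layer("fc1", 4096),
             Layer("imagenet_head", 1000)])
finetune(vgg, nb_class=2)  # cats vs. dogs
print([(l.name, l.units, l.trainable) for l in vgg.layers])
# → [('conv_block', 512, False), ('fc1', 4096, False), ('softmax', 2, True)]
```

So only the new two-way softmax layer is trainable; the pretrained weights stay frozen, which is why the notebook never needs an explicit pop-and-retrain step of its own.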

Fine-tuning (popping the last layer, or the last two layers if your dataset is very different from ImageNet) improves performance a whole lot! You can take a look at the classic VGGNet paper (https://arxiv.org/pdf/1409.1556.pdf) to understand more about VGG.
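The "pop the last two layers" variant is the same idea applied twice before attaching the new head. A hedged sketch, again with a hypothetical layer list standing in for model.layers (not the actual vgg16.py code):

```python
def pop_n_and_add_head(layers, n_pop, new_head):
    """Drop the last n_pop layers, then append a fresh classifier head.
    With n_pop=1 this matches vgg.finetune; with n_pop=2 it mirrors the
    advice for datasets that are very different from ImageNet."""
    trunk = layers[:-n_pop]
    return trunk + [new_head]

# Hypothetical names for the tail of the VGG stack (illustration only).
layers = ["conv_block", "fc1", "fc2", "imagenet_head"]
print(pop_n_and_add_head(layers, 2, "softmax_2"))
# → ['conv_block', 'fc1', 'softmax_2']
```

Popping more layers discards more of the ImageNet-specific features, so it only helps when those features don't transfer well to the new task.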