Output_shape is wrong after model.pop using keras.applications.resnet50

I’m attempting to use the pre-trained models in keras.applications.resnet50.

from keras.applications.resnet50 import ResNet50
from keras.optimizers import Adam

rn = ResNet50(include_top=False, input_shape=(224,224,3))
rn.layers.pop()  # drop the last layer
rn.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

When I run rn.summary(), the last layer shows an output shape of (None, 7, 7, 2048).

BUT … when I run rn.output_shape, it returns an output shape of (None, 1, 1, 2048).

Can someone explain how to remedy this, or perhaps what I’m doing wrong? (None, 1, 1, 2048) is what I would have expected if I hadn’t popped the last layer … so I’m confused as to why it is still the output_shape.

There seem to be some differences between what pop() does in the Sequential and functional APIs. ResNet50 uses the functional API, and as said here:

"Sequential.pop() pops the last layer in model.layers and takes care of everything the model includes, output included.
list.pop() (what you do here) pops the last layer from model.layers but doesn’t update the original model. What we need to do is use model.layers[-1].output instead of model.output to form a new model, which is what you do here (correctly)."
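To make the distinction concrete, here is a minimal sketch with a tiny toy functional model (the layer names and shapes are illustrative, not ResNet’s): popping the list returned by model.layers does not rebuild the graph, so output_shape is unchanged, while constructing a new Model that ends at an inner layer’s output does give the expected shape. This assumes tf.keras.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Toy functional model: conv -> global average pool
inp = keras.Input(shape=(8, 8, 3))
x = layers.Conv2D(4, 3, name="conv")(inp)            # -> (None, 6, 6, 4)
out = layers.GlobalAveragePooling2D(name="pool")(x)  # -> (None, 4)
model = keras.Model(inp, out)

# list.pop only mutates a plain Python list; the model's graph
# (and therefore model.output_shape) is untouched
model.layers.pop()
print(model.output_shape)  # still (None, 4)

# The fix: build a new Model ending at the penultimate layer's output
truncated = keras.Model(inputs=model.input, outputs=model.layers[-2].output)
print(truncated.output_shape)  # (None, 6, 6, 4)
```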
Another discussion that might be of interest to you, on the correct way to fine-tune a model with the functional API, is:

Thanks for the links Angel!

You got me remembering I ran into this before with the functional API. Your links eventually got me to a nice little section in the keras documentation on using the framework’s pre-trained models here.

My solution now looks like this:

from keras.applications.resnet50 import ResNet50
from keras.models import Model

base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224,224,3))
rn = Model(inputs=base_model.input, outputs=base_model.layers[-2].output)

With this, rn.output_shape and rn.layers[-1].output_shape both return (None, 7, 7, 2048).
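For completeness, the usual fine-tuning pattern with the functional API is to freeze the pre-trained base and attach a fresh classification head on top of its output. A sketch, assuming tf.keras and a hypothetical num_classes (weights=None here only to keep the sketch self-contained; use weights='imagenet' in practice):

```python
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10  # hypothetical number of target classes

# weights=None avoids the ImageNet download; use weights='imagenet' in practice
base_model = keras.applications.ResNet50(
    weights=None, include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # freeze the pre-trained convolutional base

# New classification head on top of the (None, 7, 7, 2048) feature maps
x = layers.GlobalAveragePooling2D()(base_model.output)
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = keras.Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer=keras.optimizers.Adam(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Only the new Dense head’s weights are updated during training; the base can be unfrozen later for a second, lower-learning-rate fine-tuning pass.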