How to use a pretrained VGG16 model for an image set of 75x75 pixels?

Since a pretrained VGG16 network expects (3, 224, 224) inputs, is it possible to use the VGG16 model for an image set of (3, 75, 75) images?

Hi,
I guess this is potentially for this Kaggle competition:

I didn’t work much on this challenge, but you can just resize your images with cv2.resize. Resizing interpolates pixels, but the data representation should still be valid. Alternatively, you can copy the 75x75 image matrix into the center of a 224x224 matrix if you want to keep the exact pixel values. A sketch of both options is below.
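Here is a minimal sketch of both options, assuming the images are (75, 75, 3) NumPy arrays; the function names are just for illustration:

```python
import numpy as np
import cv2

def resize_to_224(img_75):
    # Option 1: interpolate up to the size the pretrained network expects.
    # cv2.resize takes the target size as (width, height).
    return cv2.resize(img_75, (224, 224), interpolation=cv2.INTER_LINEAR)

def pad_to_224(img_75):
    # Option 2: keep the exact pixel values by placing the 75x75 patch
    # at the center of a 224x224 canvas (zero padding around it).
    canvas = np.zeros((224, 224, 3), dtype=img_75.dtype)
    offset = (224 - 75) // 2
    canvas[offset:offset + 75, offset:offset + 75, :] = img_75
    return canvas

img = np.random.rand(75, 75, 3).astype(np.float32)  # stand-in for a real image
print(resize_to_224(img).shape, pad_to_224(img).shape)  # (224, 224, 3) (224, 224, 3)
```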

I tried that with ResNet50 and it gave a top-20% result with a single model and no ensemble strategy.


You can use the same model! This is explained in lesson 7 and kind of blew my mind: convolutional layers don’t care about input size; only the fully connected head is tied to a fixed size.
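A tiny sketch of why that works (using the tf.keras API here as an assumption): the same convolutional layer, with the same weights, accepts both 75x75 and 224x224 inputs.

```python
import numpy as np
from tensorflow import keras

# One Conv2D layer; its kernels depend only on the number of input channels,
# not on the spatial extent of the image.
conv = keras.layers.Conv2D(filters=8, kernel_size=3, padding='same')

small = np.zeros((1, 75, 75, 3), dtype=np.float32)
large = np.zeros((1, 224, 224, 3), dtype=np.float32)

print(conv(small).shape)  # (1, 75, 75, 8)
print(conv(large).shape)  # (1, 224, 224, 8), same layer, same weights
```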


VGG16 supports images down to 48x48 as input. You need to instantiate your model with
keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_shape=(75,75,3))
and then add your own model head, either fully convolutional or dense.


Smart! You are right. Thanks for the apt answer. I am trying it with the Xception model trained on the ImageNet dataset.

Okay. I was trying to run this using the Vgg16 model utilities built by Jeremy. I think your approach should be easier. Do you know which layers should be set to trainable=False in your case? Since we are now adding a different Dense head on top of the original convolutional layers.

This is true, except that the call should use include_top=False if you want to add your own top layers to the base VGG model; a sketch with that change is below.
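A minimal sketch of the corrected setup, assuming the tf.keras API: the VGG16 convolutional base with ImageNet weights and a 75x75x3 input, the pretrained layers frozen at first, and a small custom head. The head layers and the binary output are illustrative assumptions, not from the posts above.

```python
from tensorflow import keras

# VGG16 conv base only: include_top=False drops the 224x224-specific dense layers.
base = keras.applications.vgg16.VGG16(
    include_top=False,
    weights='imagenet',
    input_shape=(75, 75, 3),
)
base.trainable = False  # freeze the pretrained conv layers for the first training phase

# Custom head on top of the frozen base (hypothetical sizes, binary target assumed).
model = keras.Sequential([
    base,
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```

Once the head has converged you can unfreeze some or all of the base layers and fine-tune with a lower learning rate.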


Sure! I copied the code from the Keras docs and forgot to change it!