Finetuning using input image whose size is smaller than images used to pretrain ResNet50

Hi all,
I am using the pretrained ResNet50 architecture available in fastai to fine-tune on a set of satellite images whose size is 64x64 pixels. Since ResNet50 was pretrained on 224x224-pixel images, I would like to know how fastai uses the smaller images to update the model. My data block is:

from fastai.vision.all import *

blocks = DataBlock(blocks=(ImageBlock, CategoryBlock),
                   get_items=get_image_files,
                   splitter=RandomSplitter(seed=42),
                   get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
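
The rest of the pipeline is roughly the following (path being the folder that contains the 64x64 images):

dls = blocks.dataloaders(path)                            # no Resize transform anywhere
learn = vision_learner(dls, resnet50, metrics=accuracy)   # pretrained ResNet50
learn.fine_tune(5)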

I don’t apply any resizing to match ResNet’s 224x224-pixel input size. My questions are:

  1. Can the fine-tuning work properly without any explicit resize?
  2. How does fastai match the size of my images (64x64) to that of the images used to pretrain ResNet50 (224x224)?

Thanks in advance for any suggestion.
Luigi

Hi Luigi, Jonathan’s post answers your question:
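
The short version: ResNet50 is fully convolutional up to an adaptive average pooling layer, so none of its weights depend on the spatial size of the input; only the activation maps change size. Here is a quick way to convince yourself, using plain torchvision rather than fastai:

import torch
from torchvision.models import resnet50

model = resnet50()                      # weights don't matter for this shape check
n_params = sum(p.numel() for p in model.parameters())

for size in (64, 224):
    x = torch.randn(2, 3, size, size)   # a batch of 2 RGB images
    out = model(x)                      # adaptive average pooling makes both sizes work
    print(size, out.shape, n_params)    # same (2, 1000) output shape, same parameter count

So without an explicit Resize, fastai simply batches your 64x64 images as they are (they all share the same size, so they can be stacked) and the pretrained weights are reused unchanged.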


Hi Ross,
thank you very much for your quick answer; Jonathan’s post makes perfect sense. I compared the same architecture (ResNet50) fine-tuned on a different set of satellite images, this time 256x256 pixels: the number of parameters at each layer doesn’t change, and of course the total is always the same.
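
In case it is useful to someone else, the check was roughly this (path_64 and path_256 are placeholders for the two image folders, and blocks is the DataBlock from my first post):

for path, px in [(path_64, 64), (path_256, 256)]:
    dls = blocks.dataloaders(path)
    learn = vision_learner(dls, resnet50)
    # the activation shapes reported by the summary change with the input size,
    # but the per-layer and total parameter counts are identical
    print(f'--- {px}x{px} images ---')
    print(learn.summary())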
