Image Resizing pre-training

So I’m new to this, but I have an idea I want to play around with that would involve training on a custom-made set of real estate photography, though this question can be generalized. The images I’m scraping come in a wide variety of sizes, and I’m wondering two things: what is the optimal size to train at (from the videos I’ve seen so far I’m guessing under 512x512), and how do you handle the different sizes the photos come in (i.e. how do you normalize their dimensions)? Cropping feels like it loses information, while stretching the image feels like it distorts the information…


Transfer learning is a great place to start. Basically, you take a neural network pretrained on a large set of images and apply it to your problem.

I feel a little bit bad about plugging a blog post that I wrote, but I think it is highly relevant. More importantly, it comes with a Jupyter notebook containing a self-contained example of transfer learning built directly on top of Keras, which might be quite helpful if you are just starting out.

Nonetheless, nothing beats the explanation provided by @jeremy, and the info you are after is in the first two lectures of Part 1.

The simple answer is that if you go for transfer learning, you need to preprocess your images the same way the images were preprocessed when the original neural net you are reusing was trained. In the case of VGG16 (which is a great starting point), you need to resize (or crop) images to 224x224. Resizing is probably the way to go, unless your target is positioned in the center or you are planning on combining outputs somehow.
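As a rough sketch of what that preprocessing looks like using just Pillow and NumPy (the function name and filter choice are my own; the subtracted values are the standard ImageNet per-channel means that VGG16 was trained with, in BGR order):

```python
import numpy as np
from PIL import Image

def preprocess_for_vgg16(img, size=(224, 224)):
    # Resize (distorting the aspect ratio) to the 224x224 input VGG16 expects.
    img = img.convert("RGB").resize(size, Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32)
    # VGG16 was trained on BGR-ordered images with the ImageNet
    # channel means subtracted, so flip the channels and subtract.
    arr = arr[..., ::-1] - np.array([103.939, 116.779, 123.68], dtype=np.float32)
    return arr
```

In practice you would likely just use the `preprocess_input` helper that ships with your framework, but this shows what it is doing under the hood.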

And yes, resizing will distort aspect ratios… but it still works quite well and is probably not something to worry about at this point. As always, the answer is to experiment, so give it a go and you can tweak your approach along the way!


I have images that are 480x640 pixels. What is the best way to resize to 300x300?
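One option, if you want to avoid both stretching and cropping, is to letterbox: shrink the image to fit inside 300x300 and pad the borders. A minimal sketch with Pillow (the function name, fill color, and filter are my own choices):

```python
from PIL import Image

def resize_pad(img, size=(300, 300), fill=(0, 0, 0)):
    # Scale the image down to fit inside `size` while keeping its
    # aspect ratio (thumbnail resizes in place), then paste it
    # centered onto a fixed-size canvas, padding with `fill`.
    img = img.copy()
    img.thumbnail(size, Image.BILINEAR)
    canvas = Image.new("RGB", size, fill)
    canvas.paste(img, ((size[0] - img.width) // 2,
                       (size[1] - img.height) // 2))
    return canvas
```

A plain `img.resize((300, 300))` also works if a little distortion is acceptable, which, as noted above, it often is.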