Resizing of images for image classification

Generally, when doing image classification we crop the image along its longer side to make it square and then rescale it to the dimensions we need. But in doing that we sometimes lose information: if the object we are trying to detect sits in a corner of the image, it gets cropped out.

What if we instead resize the image to the required dimensions without cropping along the longer side? I know the objects in the image will look distorted because the aspect ratio changes. Can we use this approach for an image classification task?
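To make the two approaches concrete, here is a minimal sketch using Pillow (the image path is just a placeholder):

```python
from PIL import Image

def crop_then_resize(img: Image.Image, size: int) -> Image.Image:
    """Center-crop the longer side to a square, then rescale.
    Keeps the aspect ratio but may cut off objects near the edges."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize((size, size))

def squish_resize(img: Image.Image, size: int) -> Image.Image:
    """Resize both dimensions directly ("squish"/stretch).
    Keeps every pixel but distorts the aspect ratio."""
    return img.resize((size, size))

img = Image.open("cat.jpg")  # hypothetical example image
print(crop_then_resize(img, 224).size)  # (224, 224)
print(squish_resize(img, 224).size)     # (224, 224)
```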

Jeremy mentions in the lectures that stretching still seems to work pretty well. I think that is what fast.ai does by default when processing images.

You should be able to visualize example batches of your images in code. There are some good examples of intentional stretching in the “lesson6-pets-more” notebook.
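If you are on the newer fastai (v2) API, a sketch like the one below lets you pick the resize behavior explicitly and then inspect a batch; the data path is a placeholder, and note the referenced notebook itself uses the older v1 API:

```python
from fastai.vision.all import *

path = Path("data/pets")  # hypothetical ImageNet-style folder of images

dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2,
    item_tfms=Resize(224, method=ResizeMethod.Squish),  # stretch instead of crop
    # ResizeMethod.Crop would center-crop; ResizeMethod.Pad would letterbox
)

dls.show_batch(max_n=9)  # inspect a batch to see the (intentional) stretching
```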

Okay. I thought that fast.ai cropped the images while processing, as explained in this [thread](Understanding how image size is handled in fast.ai).