Generally, for image classification we crop the image along its longer side to make it square and then rescale it to the required dimensions. But in doing that we sometimes lose information: if the object we are trying to classify sits in a corner of the image, it gets cropped out.
What if we instead resized the image to the required dimensions without cropping the longer side at all? I know the objects would look distorted because the aspect ratio changes. Can this approach still be used for an image classification task?
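As a rough sketch of the trade-off (the function names and the 640x480 example are mine, just for illustration): center-cropping a W x H image to a square discards a fixed fraction of the pixels, while resizing directly to a square keeps every pixel but stretches one axis relative to the other.

```python
# Fraction of pixels thrown away when a W x H image is
# center-cropped to a square before resizing.
def crop_loss_fraction(w, h):
    return 1 - min(w, h) / max(w, h)

# How much one axis is scaled relative to the other when the
# image is resized directly to a square (no cropping).
def stretch_factor(w, h):
    return max(w, h) / min(w, h)

# Example: a 640x480 photo.
print(crop_loss_fraction(640, 480))  # 0.25 -> a quarter of the pixels are cropped away
print(stretch_factor(640, 480))      # ~1.33 -> one axis is squashed 1.33x more than the other
```

So for a typical 4:3 photo, cropping loses 25% of the image, while direct resizing keeps everything at the cost of a 1.33x distortion, which is the choice the question is about.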