Effect of Image Scaling on ConvNets

Question 1

I have an image classification dataset consisting of images with varying aspect ratios. My classifier is a ResNet. As of now, I just squish the images to 225×225 and train the classifier. Is this going to impact the accuracy of my classifier? If so, is there something I can do to take care of this?
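To make the question concrete, here is a minimal sketch (using PIL; function names are my own) of the two resizing strategies in play: squishing, which distorts the aspect ratio, versus letterboxing, which preserves it by padding:

```python
from PIL import Image

def squish(img, size=225):
    # Ignore aspect ratio: stretch both sides to `size` (distorts objects)
    return img.resize((size, size), Image.BILINEAR)

def letterbox(img, size=225, fill=0):
    # Preserve aspect ratio: scale the longer side to `size`, pad the rest
    w, h = img.size
    scale = size / max(w, h)
    nw, nh = round(w * scale), round(h * scale)
    resized = img.resize((nw, nh), Image.BILINEAR)
    canvas = Image.new(img.mode, (size, size), fill)
    canvas.paste(resized, ((size - nw) // 2, (size - nh) // 2))
    return canvas
```

Both produce a fixed 225×225 input for the network; they differ only in whether object shapes are distorted or borders are padded.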

One suggestion that I have come across is to include images with the same/similar aspect ratio in the same training batch. Has this been tried by anyone with success?
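For reference, the bucketing suggestion above can be sketched as follows (a toy implementation of my own, not from any particular library): group image indices by their nearest aspect-ratio bin so each batch can share one target shape:

```python
from collections import defaultdict

def bucket_by_aspect(sizes, bins=(0.75, 1.0, 1.33), batch_size=4):
    # Group image indices by the nearest aspect-ratio bin, then emit
    # batches drawn from a single bin so every batch can be resized
    # to one shared shape. `sizes` is a list of (width, height) tuples.
    buckets = defaultdict(list)
    for idx, (w, h) in enumerate(sizes):
        ratio = w / h
        nearest = min(bins, key=lambda b: abs(b - ratio))
        buckets[nearest].append(idx)
    batches = []
    for idxs in buckets.values():
        for i in range(0, len(idxs), batch_size):
            batches.append(idxs[i:i + batch_size])
    return batches
```

A real data loader would also shuffle within buckets each epoch, but this shows the core idea.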

The original ResNet paper says this:

The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip […] The standard color augmentation in [21] is used.

Is this widely adopted when training a ResNet?
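If I understand the quoted procedure correctly, it amounts to something like the following (my own PIL sketch, leaving out the color augmentation):

```python
import random
from PIL import Image

def resnet_train_crop(img, crop=224, scale_range=(256, 480)):
    # Scale augmentation as described in the ResNet paper: resize so the
    # shorter side is a random value in [256, 480], then take a random
    # 224x224 crop, mirrored with probability 0.5.
    short = random.randint(*scale_range)
    w, h = img.size
    ratio = short / min(w, h)
    img = img.resize((round(w * ratio), round(h * ratio)), Image.BILINEAR)
    w, h = img.size
    left = random.randint(0, w - crop)
    top = random.randint(0, h - crop)
    img = img.crop((left, top, left + crop, top + crop))
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    return img
```

Note that random cropping from a randomly scaled image sidesteps the squishing question entirely: the aspect ratio is never distorted, only cropped.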

Question 2

Furthermore, my training images are smaller than my test images (by a factor of 2x, or sometimes even 3x). In theory, can a model trained on smaller images perform reasonably well on larger images?

Question 3

Also, my training images vary in aspect ratio as well as in size. Would training on them make my CNN invariant to scale changes in the underlying image data?

Any help or insight is much appreciated.

Thanks & Regards,
Vinayak Nayak.