ResNet with non-standard size input images

I have been using the fastai library for a good eight months. However, of late I am faced with a question about how ResNet handles non-standard size input images. For example, ResNet-18 is typically trained on 224x224 images, but when we create a DataBunch we provide the image size ourselves. In my case the images are 200x80 and 1024x250, and as I don't want to lose information to cropping, I use size=(200,80) and size=(1024,250) respectively when creating the DataBunch. Does the learner resize the images to the standard 224x224? If so, what explains the better accuracy I get with the higher-resolution images?
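As a sanity check while investigating this, I found that a ResNet-style network built with adaptive average pooling before the final linear layer accepts any input size, since the pooling collapses whatever spatial grid the conv stack produces down to a fixed size. Here is a minimal toy sketch (plain PyTorch, not the actual fastai/torchvision ResNet, and the layer sizes are illustrative only):

```python
import torch
import torch.nn as nn

# Toy conv net with the same size-agnostic structure as ResNet:
# conv layers -> adaptive average pool -> flatten -> linear head.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # collapses any HxW feature map to 1x1
    nn.Flatten(),
    nn.Linear(8, 10),
)

# The same network runs on 224x224 and on my non-standard sizes.
for h, w in [(224, 224), (200, 80), (1024, 250)]:
    out = net(torch.randn(1, 3, h, w))
    print((h, w), out.shape)  # output is [1, 10] for every input size
```

If the real ResNet behaves the same way, that would suggest no forced resize to 224x224 is needed, though I'd like confirmation of what fastai actually does with the size argument.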