Dealing with large images (800 × 1400)

I have a dataset in which all the images are around 800 × 1400. Now, when I call tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1), does it resize the image to sz × sz? Is that enough to deal with large images?
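For intuition, here is a hedged sketch of what a typical "resize to sz" transform does (this is an assumption about the general pattern, not the actual fastai source): scale the shorter side to sz, then center-crop to sz × sz, so an 800 × 1400 image ends up sz × sz regardless of aspect ratio. `resize_then_crop_dims` is a hypothetical helper for illustration:

```python
# Hedged sketch: compute the intermediate (scaled) and final (cropped)
# sizes for a scale-shorter-side-then-center-crop transform.
# NOT the fastai implementation; just the common resizing pattern.

def resize_then_crop_dims(w, h, sz):
    """Return ((scaled_w, scaled_h), (final_w, final_h))."""
    scale = sz / min(w, h)          # scale so the shorter side becomes sz
    scaled = (round(w * scale), round(h * scale))
    return scaled, (sz, sz)         # center crop then yields sz x sz

scaled, final = resize_then_crop_dims(800, 1400, 224)
print(scaled, final)  # (224, 392) (224, 224)
```

So yes, under this scheme the network always sees a fixed sz × sz input, whatever the original resolution.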

I am asking because in lesson 2 (dog breeds section), @Jeremy mentions that large/small images have to be dealt with in a special manner. At the same time, I have also read in a couple of forum posts that modern CNN architectures are size agnostic. Please guide me on what the right way is.

Currently, using the differential learning rates, I am able to achieve around 83% validation accuracy.

As usual, it completely depends on your problem.

A recent competition on Kaggle where you were asked to segment cars from their background was won by people who fed in the images at full HD resolution. You need quite some hardware to do such a thing, and even then you are dealing with batch sizes of around 4.
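To see why the batch size collapses at high resolution: activation memory in a CNN grows roughly linearly with the number of input pixels (a rough, hedged rule of thumb, ignoring architecture details), so full HD costs dozens of times more memory per image than a 224 × 224 crop:

```python
# Rough estimate: CNN activation memory scales ~linearly with H * W,
# so the pixel ratio approximates how much the batch size must shrink.

def pixel_ratio(hw_large, hw_small):
    """How many times more pixels the large input has than the small one."""
    return (hw_large[0] * hw_large[1]) / (hw_small[0] * hw_small[1])

r = pixel_ratio((1920, 1080), (224, 224))  # full HD vs. a standard ImageNet crop
print(round(r))  # ~41x more pixels per image
```

With ~41× the per-image cost, a batch of 64 at 224 × 224 becomes a batch of 1 or 2 at full HD on the same GPU, which is why those solutions ran with batch sizes around 4 on serious hardware.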

For classification, the common ground on ImageNet is something like 224 × 224 pixels; everything larger is simply resized.

I suggest you start with a smaller resolution, and if the accuracy isn't sufficient, move to larger images.
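The start-small-then-grow idea can be sketched as a simple size schedule (sometimes called progressive resizing). The schedule below is an illustration, not a prescription; the actual training call depends on your library:

```python
# Hedged sketch of a "start small, grow if accuracy is insufficient"
# resolution schedule: double the size until the target is reached.

def progressive_sizes(start=128, final=448):
    """Yield training resolutions, doubling from start until final."""
    sz = start
    while sz < final:
        yield sz
        sz *= 2
    yield final

sizes = list(progressive_sizes(128, 448))
print(sizes)  # [128, 256, 448]
```

At each size you would retrain (or fine-tune) the same model, stopping early if a smaller resolution already gives acceptable accuracy.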