I see that for resnet 34 the image size Jeremy used in the lesson is 224, and for resnet 50 it is 299.
I think this value is determined by the structure of the architecture.
If that is correct:
What are the optimal sizes for other resnets, like resnet 101 etc.?
If my image has a higher resolution, does fastai resize it? (I think so, as the code never complained about image size during training…)
The higher the resolution, the better the accuracy, but also the more memory needed. fastai will resize images to whatever size you specify when creating the data, for example with ImageDataBunch.
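To make that concrete, here is a minimal stdlib-only sketch of what a resize step does, using nearest-neighbor sampling (fastai/PIL actually use fancier interpolation such as bilinear, and work on real image tensors; this toy function is just an illustration): whatever resolution the input has, every output is mapped to the target size you specify.

```python
# Toy nearest-neighbor resize on a 2D list of pixel values (one channel).
# Illustration only: real libraries use bilinear/bicubic interpolation.
def resize_nearest(img, target_h, target_w):
    src_h, src_w = len(img), len(img[0])
    out = []
    for y in range(target_h):
        # Pick the nearest source row for this output row
        src_y = min(src_h - 1, y * src_h // target_h)
        row = [img[src_y][min(src_w - 1, x * src_w // target_w)]
               for x in range(target_w)]
        out.append(row)
    return out

# A 4x4 "image" resized down to 2x2: the output samples the source grid.
img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]
print(resize_nearest(img, 2, 2))  # -> [[0, 2], [8, 10]]
```

So the library never rejects a large image; it just maps it down (or up) to the `size` you ask for before batching.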
I agree that the image size can affect the memory used.
However, since CNNs (including resnets) identify features by applying convolutional kernels to pixels, my belief is that the resolution of the pictures on which the resnet was originally trained (i.e. the ImageNet images) matters when I apply transfer learning, because it affects how features are identified.
For example, on a low-resolution image the nose of a dog can appear as two black pixels surrounded by some black, while on a very high-resolution image it can appear as two very large black circles surrounded by leathery, shiny tissue.
So my guess is that each resnet has a standard, fixed size for its first layer (the one nearest to the image), upon which the first convolution operates. Therefore, providing images of a resolution higher than this layer (I mean, of a size larger than this layer) is not useful.
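One thing worth checking with simple arithmetic: ResNets are fully convolutional up to the final pooling layer, so the first conv layer does not actually fix the input size; what changes with resolution is the size of the last feature map. The sketch below assumes the standard torchvision-style ResNet stride schedule (7x7 stride-2 stem conv, 3x3 stride-2 max pool, then four stages where stages 2-4 each halve the resolution) and uses the usual conv output formula:

```python
# Spatial size of the final ResNet feature map for a given input size.
# Standard conv output formula: floor((n + 2p - k) / s) + 1.
# Assumes the common torchvision ResNet stride schedule.
def conv_out(n, k, s, p):
    return (n + 2 * p - k) // s + 1

def resnet_feature_map(size):
    n = conv_out(size, k=7, s=2, p=3)   # stem: 7x7 conv, stride 2
    n = conv_out(n, k=3, s=2, p=1)      # 3x3 max pool, stride 2
    for _ in range(3):                  # stages 2-4 each halve resolution
        n = conv_out(n, k=3, s=2, p=1)  # (stage 1 keeps stride 1)
    return n

for size in (224, 299, 448):
    print(size, "->", resnet_feature_map(size))
# 224 -> 7, 299 -> 10, 448 -> 14
```

A larger input just produces a larger final grid (7x7 at 224, 14x14 at 448), and the adaptive average pooling at the end collapses any grid size before the classifier head, which is why training does not complain about image size. Whether the extra resolution is *useful* for transfer learning is a separate question, since the pretrained kernels saw features at ImageNet scale.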