Question about image size in training

I have a small set of small images (111x111), so I read them all into memory as numpy arrays, used tfms_from_stats to get the transformations, and built the datasets with MatchedArraysIndexDataset via ImageData.get_ds. See the code below:

# stats, aug_tfms, TfmY, bs, nw and all_labels are defined earlier in my notebook
tfms = tfms_from_stats(stats, sz, crop_type=CropType.NO, tfm_y=TfmY, aug_tfms=aug_tfms)
datasets = ImageData.get_ds(MatchedArraysIndexDataset, (trn_x, trn_y), (val_x, val_y), tfms=tfms)
md = ImageClassifierData(PATH, datasets, bs=bs, num_workers=nw, classes=all_labels)
learn = ConvLearner.pretrained(f, md, precompute=True, ps=0.5)

Here sz=128 and f is resnet34; my original training images are 111x111. When I call next(iter(md.trn_dl)), I can see the images have been resized to 128x128; however, learn.summary() still reports 111x111 as the input size of the first layer.
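
For reference, this is roughly the check I did (just the two calls mentioned above):

x, y = next(iter(md.trn_dl))
print(x.size())    # the spatial size here is 128x128, so the DataLoader does resize to sz=128
learn.summary()    # but this still reports 111x111 as the input size of the first layer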

Training runs fine, but I am not sure whether it is actually training on 128x128 or on 111x111 images. Could anybody clarify this for me?
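
One idea I had for checking (not sure if it is valid) was to register a forward hook on the first layer and print the size of the tensor it actually receives. A rough sketch, assuming everything is imported via from fastai.conv_learner import * and that I turn precompute off first so the full network sees the images rather than precomputed activations:

learn.precompute = False                      # so learn.model is the full convnet, not just the head
first_layer = next(learn.model.children())    # grab the first module of the model
handle = first_layer.register_forward_hook(
    lambda module, inp, out: print('first-layer input size:', inp[0].size()))
x, y = next(iter(md.trn_dl))                  # one batch from the training DataLoader
learn.model.eval()
learn.model(V(x))                             # V() wraps the batch as a Variable (on the GPU if available)
handle.remove()

If that is a sound way to check, the printed size should be what the network really sees, but I would still like to understand why learn.summary() disagrees with the DataLoader.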
Also, does it technically matter that I change the size to something like 64, 128, 256, 512, or 1024?