When creating an `ImageDataBunch` like so:

```python
ImageDataBunch.from_folder(image_path, train='.', valid_pct=0.2, seed=66,
                           ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
```

there is a `size` parameter.
From what I can see in the docs, I assume this resizes and crops images to 224x224. Based on that assumption, my guess is that it centre-crops the image.
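For what it's worth, this is how I've been trying to sanity-check those assumptions. It's just a sketch; I'm assuming `image_path` points at my image folder and that indexing into `train_ds` applies the transforms:

```python
from fastai.vision import ImageDataBunch, get_transforms, imagenet_stats

data = ImageDataBunch.from_folder(image_path, train='.', valid_pct=0.2, seed=66,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)

# Inspect one transformed training sample; if my assumption is right,
# this should print torch.Size([3, 224, 224])
x, y = data.train_ds[0]
print(x.shape)

# Eyeball a batch to see whether the crops look centred or random
data.show_batch(rows=3, figsize=(6, 6))
```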
My question is threefold:
- Are the two assumptions above correct?
- What happens when it encounters an image with smaller dimensions, such as 120x120 or 120x400? Does it scale the image up, or is the image still cropped? (I try to probe this in the sketch after this list.)
- Most importantly, does having images of different dimensions affect the accuracy of the model? Basically, is this an issue that needs to be addressed?
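To probe the second question myself, I tried applying the same transforms to a single small image. This is only a sketch: `small.jpg` is a placeholder for a 120x120 file, and I'm assuming `apply_tfms` with `size=224` does the same thing the data bunch does internally:

```python
from fastai.vision import open_image, get_transforms

img = open_image('small.jpg')  # placeholder path to a 120x120 image
print(img.shape)               # torch.Size([3, 120, 120])

# Apply the training-side transforms with the same target size as the bunch
train_tfms, _ = get_transforms()
out = img.apply_tfms(train_tfms, size=224)
print(out.shape)  # I expect torch.Size([3, 224, 224]) if smaller images get scaled up
```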