ImageDataBunch

When creating an image data bunch like so: `ImageDataBunch.from_folder(image_path, train='.', valid_pct=0.2, seed=66, ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)`, there is a `size` parameter.
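
Laid out as a runnable snippet (a minimal sketch assuming fastai v1; the path is a placeholder, not from the post):

```python
from fastai.vision import *

image_path = Path('data/images')  # hypothetical path, not from the post

data = (ImageDataBunch
        .from_folder(image_path, train='.', valid_pct=0.2, seed=66,
                     ds_tfms=get_transforms(), size=224)
        .normalize(imagenet_stats))
```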

From reading the docs, I assume this resizes and crops images down to 224x224.

Building on that assumption, I'd guess it centre-crops the image.

My question is threefold:

  1. Are the two assumptions above correct?
  2. What happens when it encounters an image with smaller dimensions, such as 120x120 or 120x400? Does it scale the image up, or is that image still cropped?
  3. And most importantly, does having images of different dimensions affect the accuracy of the model? Basically, is this an issue that needs to be addressed?

The first two are correct.

If you look at the transform arguments, there is a padding mode that gets applied, and it defaults to reflection.
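
To make the resize/crop/padding behaviour concrete, here is a sketch assuming fastai v1, where `padding_mode` and `resize_method` are keyword arguments forwarded from `ImageDataBunch.from_folder` to the dataset transforms (the path and the annotated defaults are my assumptions, not part of the original post). If I read the v1 behaviour correctly, a smaller image such as 120x120 is scaled up before the crop, and reflection padding fills in whenever the transformed image would otherwise fall outside its frame:

```python
from fastai.vision import *

image_path = Path('data/images')  # hypothetical path, not from the thread

# Sketch assuming fastai v1 forwards these kwargs to the dataset transforms:
#  - resize_method controls how each image is brought to 224x224:
#      ResizeMethod.CROP   -> scale so the shorter side is 224, then crop
#                             (what the question assumes)
#      ResizeMethod.SQUISH -> stretch to 224x224, ignoring aspect ratio
#      ResizeMethod.PAD    -> scale so the longer side is 224, then pad
#  - padding_mode controls how padded pixels are filled; 'reflection' is the
#    default, with 'zeros' and 'border' as the alternatives
data = (ImageDataBunch
        .from_folder(image_path, train='.', valid_pct=0.2, seed=66,
                     ds_tfms=get_transforms(), size=224,
                     resize_method=ResizeMethod.CROP,
                     padding_mode='reflection')
        .normalize(imagenet_stats))
```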

In general, the higher the resolution you go, the better accuracy you can get, at the cost of more computation. We use 224 because it is a multiple of 16 (or 8), so it can work more efficiently on our devices :slight_smile:
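
As a rough illustration of that trade-off (my own sketch, not from the thread; the learner setup is an assumption), you could rebuild the data bunch at a larger size that is still a multiple of 16 and keep training:

```python
from fastai.vision import *

image_path = Path('data/images')  # hypothetical path, not from the thread

# 352 = 22 * 16, so it keeps the same alignment argument as 224
data_352 = (ImageDataBunch
            .from_folder(image_path, train='.', valid_pct=0.2, seed=66,
                         ds_tfms=get_transforms(), size=352)
            .normalize(imagenet_stats))

learn = cnn_learner(data_352, models.resnet34, metrics=accuracy)  # assumed setup
learn.fit_one_cycle(4)  # higher resolution: typically better accuracy, more compute
```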