How to make these images "padded"

Hi,

Apologies in advance, this is a very basic coding question.

I have a model which classifies eye disease.

However, when images are loaded into the DataBunch, they are cropped to 224×224. This loses information at the lateral edges of some of the images, and it's bringing the accuracy down.


In which part of this code can I instruct the DataBunch function not to crop the information out, but rather shrink the image down and pad the top and bottom, letterbox-style? Everything I've tried has thrown an error.

data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)

Full code here

Does this happen with one image or every image? You can turn off transforms you don't need: check the transforms in the documentation and disable them, e.g. get_transforms(flip_vert=False, …). Also, load one image and check image.size so you know the native dimensions of your pictures (a sketch follows below). Note that if your pictures are rectangular they will be cropped to a square shape. Padding is also part of the transforms. Hope this helps.
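A minimal sketch of what that could look like (fastai v1); the path and file name are placeholders for your own data:

from fastai.vision import *

path = Path('data')  # hypothetical: wherever your image folders live

# Inspect one image's native dimensions before building the bunch
img = open_image(path/'example.jpg')  # hypothetical file name
print(img.size)  # rectangular sizes get center-cropped to a square by default

# Disable the flip transforms; rotation, zoom, etc. keep their defaults
tfms = get_transforms(do_flip=False, flip_vert=False)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
                                  ds_tfms=tfms, size=224,
                                  num_workers=4).normalize(imagenet_stats)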

I had a similar problem with another dataset, interestingly also eye-related!

You can pass the resize_method parameter (e.g. ResizeMethod.PAD) if you are using the data_block API; a sketch follows below.
Check it out here: https://docs.fast.ai/vision.image.html#ResizeMethod
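For reference, a minimal sketch of the data_block route (fastai v1), assuming the same folder-per-class layout as the ImageDataBunch call above:

data = (ImageList.from_folder(path)       # collect images under path
        .split_by_rand_pct(0.2)           # 20% random validation split
        .label_from_folder()              # class = parent folder name
        .transform(get_transforms(), size=224,
                   resize_method=ResizeMethod.PAD,  # shrink, then pad instead of crop
                   padding_mode='zeros')            # letterbox with black borders
        .databunch(num_workers=4)
        .normalize(imagenet_stats))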


Partially solved:

np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224,
                                  resize_method=ResizeMethod.PAD,
                                  padding_mode='zeros',
                                  num_workers=4).normalize(imagenet_stats)

produced this output:

[image: letterbox-padded images]

I'm still not sure why some images are being rotated. I also notice a little warping going on.

thanks @ilovescience

They're rotated because you're using the default transform arguments of get_transforms: among other things, the defaults include random rotation (max_rotate=10.0) and perspective warping (max_warp=0.2), which would also explain the slight warping you noticed.

https://docs.fast.ai/vision.transform.html#get_transforms
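If you want the padding without the rotation or warping, zeroing out those arguments should do it; a sketch, assuming the same setup as your partially solved version above:

# max_rotate=10.0 and max_warp=0.2 are the defaults; set both to 0 to disable
tfms = get_transforms(max_rotate=0., max_warp=0.)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
                                  ds_tfms=tfms, size=224,
                                  resize_method=ResizeMethod.PAD,
                                  padding_mode='zeros',
                                  num_workers=4).normalize(imagenet_stats)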