Data Reading Error: padding_mode

I am using v1 installed from GitHub, and I am trying to train a CNN from scratch using only a left/right flip transform. To load the dataset, I use ImageDataBunch, and I want to resize all the images in my train and valid sets to 128. The code is shown below:

bs = 32 # batch size
img_size = 128 # image size
tfms = [flip_lr(p=0.5), flip_lr(p=0.5)]
data = ImageDataBunch.from_folder(PATH, train='train', valid='valid', ds_tfms=tfms, size=128)

but running it gives this error:

padding_mode needs to be 'zeros' or 'border', but got reflection

Can someone give me any pointers on what I need to do? I have seen examples of padding_mode, but I do not want any padding in my images; I just want them reshaped to a square.

You don’t have pytorch v1.

Thanks for letting me know about it. I will double check if the version of pytorch is the preview one or not.

Can you also confirm whether passing the size parameter resizes the image irrespective of the aspect ratio? In the code it is mentioned in the transform as

if size:
    crop_target = _get_crop_target(size, mult=mult)
    target = _get_resize_target(x, crop_target, do_crop=do_crop)
    x.resize(target)

Does the above mean that the image will be cropped first and then resized ?
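If I read those two calls correctly, the order is the other way around: the image is first resized so that a square crop target fits inside it (preserving aspect ratio), and only then cropped. Here is a rough pure-Python sketch of that logic; the function names and the exact rounding are my assumptions, not the library's verbatim implementation:

```python
def get_crop_target(size, mult=32):
    """Assumed behavior: a square crop target whose sides are a multiple of `mult`."""
    size = (size + mult - 1) // mult * mult  # round up to the next multiple
    return (size, size)

def get_resize_target(img_size, crop_target):
    """Scale the image so the crop target fits inside it, keeping the aspect ratio."""
    h, w = img_size
    ch, cw = crop_target
    ratio = max(ch / h, cw / w)  # scale so the shorter side reaches the target
    return (round(h * ratio), round(w * ratio))

crop = get_crop_target(128)                     # (128, 128)
resize = get_resize_target((200, 300), crop)    # (128, 192): resized, then cropped to square
print(crop, resize)
```

So under this reading, a 200x300 image would be resized to 128x192 and then center-cropped to 128x128, rather than being cropped first.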

Edit: the issue was exactly that the PyTorch version had not changed. However, I now get the following error when running data.normalize:

*** RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 154 and 134 in dimension 2 at /opt/conda/conda-bld/pytorch-nightly_1540802486426/work/aten/src/TH/generic/THTensorMoreMath.cpp:1317
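This kind of error typically comes from batch collation: images of different sizes cannot be stacked into a single batch tensor, so all samples in a batch must share the same shape. A minimal, purely illustrative check:

```python
def can_collate(shapes):
    """A batch tensor can only be built if every sample has the same shape."""
    return len(set(shapes)) <= 1

print(can_collate([(3, 128, 128), (3, 128, 128)]))  # True: uniform sizes stack fine
print(can_collate([(3, 154, 128), (3, 134, 128)]))  # False: 154 vs 134 in one dimension
```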


I have identified the issue: the training dataset is resized to the number given in the size parameter, but the validation data keeps its original size. This also happens if you specify a valid_pct number.

Could you kindly let me know if this is intended behavior? @sgugger

You pass a tuple of two lists of transforms when creating your DataBunch: the first element is for your training set, the second for your validation set. If your validation images aren't touched, it's probably because you didn't specify a correct transform in the second element of this tuple.
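As a rough illustration of how such a (train_tfms, valid_tfms) pair gets dispatched per split (a simplified sketch, not fastai's actual code; the strings stand in for real transform objects):

```python
def tfms_for_split(tfms, train):
    """Pick the transform list for a dataset split from a (train, valid) pair."""
    train_tfms, valid_tfms = tfms
    return train_tfms if train else valid_tfms

tfms = (["flip_lr(p=0.5)"], [])  # no transforms for the validation set
print(tfms_for_split(tfms, train=True))   # ['flip_lr(p=0.5)']
print(tfms_for_split(tfms, train=False))  # []
```

Passing a flat list instead of such a pair is why the library ends up looking for a second element (tfms[1]) for the validation set.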

I thought passing the size parameter to the DataBunch should take care of resizing the images. Based on your reply, I guess I will have to use another transform to resize them.

Thank you!

It should. But how did you define ‘tfms’?

I defined tfms as tfms = [flip_lr(p=0.5), flip_lr(p=0.5)], since the code was always looking for tfms[1]. I have also tried passing do_crop=False as an extra parameter to ImageDataBunch, but the only thing that works for me is changing the above code to

if size:
    x.resize(target)
    x.refresh()

Note that we have changed the default behavior in the latest version. Now, even without passing any transforms, the resize will be handled properly.

I get the same error. How was it resolved? My PyTorch version is 1…