I am using fastai v1 installed from GitHub, and I am trying to train a CNN from scratch using only a left/right flip transform. To load the dataset I use ImageDataBunch, and I want to resize all the images in my train and valid sets to 128. The code is shown below:
Does the above mean that the image will be cropped first and then resized?
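To make the crop-then-resize question concrete, here is a minimal pure-Python sketch (a stand-in for illustration only, not fastai's actual implementation): center-crop a 2D "image" to a square, then resize it with nearest-neighbour sampling.

```python
def center_crop(img, ch, cw):
    """Center-crop a 2D list-of-lists image to (ch, cw)."""
    h, w = len(img), len(img[0])
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in img[top:top + ch]]

def resize_nearest(img, nh, nw):
    """Nearest-neighbour resize of a 2D list-of-lists image to (nh, nw)."""
    h, w = len(img), len(img[0])
    return [[img[i * h // nh][j * w // nw] for j in range(nw)]
            for i in range(nh)]

# A 4x6 "image"; crop to the central 4x4 square, then resize to 2x2.
img = [[10 * r + c for c in range(6)] for r in range(4)]
square = center_crop(img, 4, 4)   # drops one column on each side
small = resize_nearest(square, 2, 2)
```

Cropping first guarantees the aspect ratio is square before scaling, which is why every output ends up exactly size-by-size.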
Edit: the issue was exactly that the PyTorch version had not changed. However, I now get the following error when running data.normalize:
*** RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 154 and 134 in dimension 2 at /opt/conda/conda-bld/pytorch-nightly_1540802486426/work/aten/src/TH/generic/THTensorMoreMath.cpp:1317
I have tracked the issue down: the training dataset is resized to the value given in the size parameter, but the images in the valid split keep their original sizes. This also happens if you specify valid_pct.
Could you kindly let me know if this is intended behavior? @sgugger
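That RuntimeError is what you get when the DataLoader tries to stack validation images of different sizes into one batch tensor; torch.stack requires every tensor to have an identical shape. A minimal pure-Python stand-in for the collate step (hypothetical, just to illustrate the failure mode):

```python
def collate(batch):
    """Stack a batch of 2D list 'images'; fail loudly on shape mismatch,
    mirroring torch.stack's 'Sizes of tensors must match' error."""
    shapes = {(len(img), len(img[0])) for img in batch}
    if len(shapes) > 1:
        raise RuntimeError(f"Sizes of tensors must match, got {sorted(shapes)}")
    return batch  # a real collate would return one stacked tensor

# Uniformly resized images batch fine.
same = [[[0] * 128 for _ in range(128)] for _ in range(2)]
collate(same)

# Un-resized validation images of 154 and 134 pixels do not.
mixed = [[[0] * 154 for _ in range(154)], [[0] * 134 for _ in range(134)]]
try:
    collate(mixed)
    msg = ""
except RuntimeError as e:
    msg = str(e)
```

This is why the error only appears once a batch from the untouched validation set is drawn, not at DataBunch creation time.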
You pass a tuple of two lists of transforms when creating your DataBunch: the first element is for your training set, the second for your validation set. If your validation images aren't touched, it's probably because you didn't specify the correct transforms in the second element of this tuple.
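The per-split dispatch described above can be sketched in pure Python (names and data layout are illustrative, not fastai's internals): the (train_tfms, valid_tfms) pair is unpacked, and each list is applied only to its own split.

```python
# Illustrative per-split transform dispatch: tfms is a (train, valid) pair.
def flip_lr(img):
    """Left/right flip of a 2D list-of-lists image."""
    return [list(reversed(row)) for row in img]

def apply_tfms(dataset, tfms_pair):
    train_tfms, valid_tfms = tfms_pair   # first list -> train, second -> valid
    out = {}
    for split, tfms in (("train", train_tfms), ("valid", valid_tfms)):
        imgs = dataset[split]
        for t in tfms:
            imgs = [t(img) for img in imgs]
        out[split] = imgs
    return out

data = {"train": [[[1, 2], [3, 4]]], "valid": [[[5, 6], [7, 8]]]}
# Flip only the training images; an empty second list leaves the
# validation split untouched, which is the behavior being described.
result = apply_tfms(data, ([flip_lr], []))
```

So a flat list like [flip_lr(p=0.5), flip_lr(p=0.5)] is interpreted as that pair, with the second flip going to the validation set; the validation list is where any resize/crop transform must also appear for valid images to change size.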
I defined the transforms as tfms = [flip_lr(p=0.5), flip_lr(p=0.5)], since the code always expects a tfms argument. I have also tried passing do_crop=False as an extra parameter to ImageDataBunch, but the only thing that works for me is changing the above code to