My notebook is here:
The specific error is here:
RuntimeError: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/usr/local/lib/python3.6/dist-packages/fastai/torch_core.py", line 99, in data_collate
    return torch.utils.data.dataloader.default_collate(to_data(batch))
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 232, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 232, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 209, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 700 and 332 in dimension 2 at /pytorch/aten/src/TH/generic/THTensorMoreMath.cpp:1333
Caused by this code block:
tfms = get_transforms(max_rotate=20, max_zoom=1.3, max_lighting=0.4, max_warp=0.4, p_affine=1., p_lighting=1.)
data = ImageDataBunch.from_csv(path=BASE, folder=f'train', csv_labels="train.csv", ds_tfms=tfms, sz=sz, bs=bs, xtra_tfms=[rand_resize_crop(sz)])
I believe this is happening because the images in my dataset have different sizes, but I thought get_transforms would take care of resizing them. I have also tried explicitly passing xtra_tfms with a crop, with no luck.
Clearly I am missing something; any help is appreciated.
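To make sure I understand what the collate step is complaining about, here is a pure-Python sketch (no torch required) of what I believe default_collate is effectively doing when it stacks samples into a batch. The collate function below is my own stand-in for illustration, not the real torch/fastai function:

```python
def collate(batch_shapes):
    """Mimic torch.stack's constraint: every sample in the batch
    must have the same shape before they can be stacked."""
    first = batch_shapes[0]
    for shape in batch_shapes[1:]:
        if shape != first:
            # Same style of message as the traceback above
            raise RuntimeError(
                f"Sizes of tensors must match except in dimension 0. "
                f"Got {first[2]} and {shape[2]} in dimension 2"
            )
    # Stacking N samples of shape S yields a batch of shape (N,) + S
    return (len(batch_shapes),) + first

# Two same-size images batch fine:
print(collate([(3, 500, 700), (3, 500, 700)]))  # -> (2, 3, 500, 700)

# Two different widths (700 vs 332, as in my traceback) fail:
try:
    collate([(3, 500, 700), (3, 500, 332)])
except RuntimeError as e:
    print("collate failed:", e)
```

So if that reading is right, the resize never actually happens before batching. One thing I'm now wondering: fastai v1 seems to use size= (e.g. size=224) rather than sz= in ImageDataBunch.from_csv, so my sz=sz may simply be ignored — but I may be misreading the API.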