RuntimeError: stack expects each tensor to be equal size, but got [3, 828, 1080] at entry 0 and [3, 186, 271] at entry 1

I am trying to do image classification with the Indian food dataset
(https://www.kaggle.com/cdart99/food20dataset/), which has separate train and test folders.
I created a DataBlock:

foods = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=get_image_files,
                  get_y=parent_label,
                  splitter=GrandparentSplitter(train_name='train_set', valid_name='test_set'))
food_dsets = foods.datasets(path)
food_dsets.train[0]
dls = foods.dataloaders(path, batch_size=5)

But when I call dls.show_batch(), it throws:

RuntimeError: stack expects each tensor to be equal size, but got [3, 828, 1080] at entry 0 and [3, 186, 271] at entry 1


Traceback (most recent call last)
<ipython-input> in ()
      1 dls = foods.dataloaders(path, batch_size=5)
----> 2 dls.show_batch()

13 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py in default_collate(batch)
     53         storage = elem.storage().new_shared(numel)
     54         out = elem.new(storage)
---> 55         return torch.stack(batch, 0, out=out)
     56     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
     57             and elem_type.__name__ != 'string_':

RuntimeError: stack expects each tensor to be equal size, but got [3, 828, 1080] at entry 0 and [3, 186, 271] at entry 1
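The failure happens inside PyTorch's default collate function, which calls torch.stack on every tensor in the batch, and torch.stack requires all tensors to have identical shapes. A minimal sketch of the same failure, using the two sizes from the error message:

```python
import torch

# Two image tensors with the shapes reported in the error message
a = torch.zeros(3, 828, 1080)  # entry 0
b = torch.zeros(3, 186, 271)   # entry 1

# This is essentially what default_collate does when building a batch;
# it raises because the spatial sizes differ
try:
    torch.stack([a, b], 0)
except RuntimeError as e:
    print(e)
```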

The model is trained on batches of images, and all images in a batch must be the same size. So just add a resize transform to your DataBlock (e.g. item_tfms=Resize(224)).

foods = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=get_image_files,
                  get_y=parent_label,
                  item_tfms=Resize(224),
                  splitter=GrandparentSplitter(train_name='train_set', valid_name='test_set'))
food_dsets = foods.datasets(path)
food_dsets.train[0]
dls = foods.dataloaders(path,batch_size=5)
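To illustrate why this fixes the stack error at the tensor level, here is a sketch using plain torch.nn.functional.interpolate rather than fastai's Resize transform (which additionally handles PIL images, crop/pad method, and so on), showing that once every image is resized to 224x224, stacking into a batch succeeds:

```python
import torch
import torch.nn.functional as F

# Same mismatched sizes as in the error message
a = torch.rand(3, 828, 1080)
b = torch.rand(3, 186, 271)

# Resize both images to 224x224, roughly what item_tfms=Resize(224)
# guarantees before batching (interpolate wants a batch dim, hence
# the unsqueeze/squeeze)
resized = [
    F.interpolate(t.unsqueeze(0), size=(224, 224),
                  mode="bilinear", align_corners=False).squeeze(0)
    for t in (a, b)
]

# Now all tensors share a shape, so collating works
batch = torch.stack(resized, 0)
print(batch.shape)  # torch.Size([2, 3, 224, 224])
```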

Thanks a lot, it works!!

Thanks, works for me as well

It works for me as well: thanks for suggestion!