Error creating DataLoader for multi-Block DataBlock

I am trying to create a DataBlock with ImageBlock as the input block and MultiCategoryBlock and BBoxBlock as the output blocks.

pne = DataBlock(blocks=(ImageBlock, MultiCategoryBlock, BBoxBlock),
                splitter=RandomSplitter(valid_pct=0.2),
                get_x=ColReader(0, pref=path/'stage_2_train_images', suff='.png'),
                get_y=[ColReader(1, label_delim=';'), ColReader(3)],
                item_tfms=Resize(128),
                batch_tfms=aug_transforms(),
                n_inp=1)

This is what my Datasets object looks like:

ds = pne.datasets(df_final)
ds.train[0]

(PILImage mode=RGB size=1024x1024,
 TensorMultiCategory([1., 0., 0.]),
 TensorBBox([[541., 120., 719., 267.]]))

But when I do

dls = pne.dataloaders(df_final)

I run into this error:
Could not do one pass in your dataloader, there is something wrong in it

I am not sure how to troubleshoot this issue; any help would be much appreciated.


You need to use getters:

For example, like this:

getters = [lambda o: path/'train'/o, lambda o: img2bbox[o][0], lambda o: img2bbox[o][1]]

pascal = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock),
                   splitter=RandomSplitter(),
                   get_items=get_train_imgs,
                   getters=getters,
                   item_tfms=item_tfms,
                   batch_tfms=batch_tfms,
                   n_inp=1)

Here you have an example.
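In that snippet, img2bbox and get_train_imgs come from the COCO-style Pascal setup. Roughly, it is built like this (a sketch based on the fastai Pascal tutorial; adapt the paths and annotation file to your own data):

from fastai2.vision.all import *

path = untar_data(URLs.PASCAL_2007)
# get_annotations returns the image filenames plus (bboxes, labels) per image
imgs, lbl_bbox = get_annotations(path/'train.json')
img2bbox = dict(zip(imgs, lbl_bbox))   # filename -> (bboxes, labels)
get_train_imgs = lambda noop: imgs     # get_items simply returns the filenames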

Hi,
Thanks for the suggestion, but I don’t think the problem was with the getters themselves. I did a bit of tweaking, and my current code now looks like this:

box_block = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock),
                      splitter=RandomSplitter(),
                      get_x=ColReader(0, pref=path/'stage_2_train_images', suff='.png'),
                      get_y=[ColReader(5), ColReader(4, label_delim=';')],
                      item_tfms=item_tfms,
                      batch_tfms=batch_tfms,
                      n_inp=1)

But when I run box_block.summary(df_final), I hit an error that seems to have something to do with the bb_pad applied to the BBoxes. The .summary() output, along with the error, looks like this:

Setting-up type transforms pipelines

Collecting items from PatientID … BBoxes
0 0004cfab-14fd-4e49-80ba-63a80b6bddd6 … [[0.0, 0.0, 0.0, 0.0]]
1 00313ee0-9eaa-42f4-b0ab-c148ed3241cd … [[0.0, 0.0, 0.0, 0.0]]
2 00322d4d-1c29-4943-afc9-b6754be640eb … [[0.0, 0.0, 0.0, 0.0]]
3 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 … [[0.0, 0.0, 0.0, 0.0]]
4 00436515-870c-4b36-a041-de91049b9ab4 … [[152.0, 264.0, 531.0, 477.0], [152.0, 562.0, 605.0, 818.0]]
… … … …
26679 c1e73a4e-7afe-4ec5-8af6-ce8315d7a2f2 … [[418.0, 666.0, 641.0, 852.0], [504.0, 316.0, 777.0, 495.0]]
26680 c1ec14ff-f6d7-4b38-b0cb-fe07041cbdc8 … [[464.0, 609.0, 748.0, 849.0], [298.0, 185.0, 677.0, 413.0]]
26681 c1edf42b-5958-47ff-a1e7-4f23d99583ba … [[0.0, 0.0, 0.0, 0.0]]
26682 c1f6b555-2eb1-4231-98f6-50a963976431 … [[0.0, 0.0, 0.0, 0.0]]
26683 c1f7889a-9ea9-4acb-b64c-b737c929599a … [[393.0, 570.0, 738.0, 831.0], [424.0, 233.0, 780.0, 434.0]]

[26684 rows x 6 columns]
Found 26684 items
2 datasets of sizes 21348,5336
Setting up Pipeline: ColReader -> PILBase.create
Setting up Pipeline: ColReader -> TensorBBox.create
Setting up Pipeline: ColReader -> MultiCategorize

Building one sample
Pipeline: ColReader -> PILBase.create
starting from
PatientID aeeacc25-019f-4e78-b9e5-829c5d916fc2
dicom …/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.dcm
png …/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.png
Target 0
Classification No Lung Opacity / Not Normal
BBoxes [[0.0, 0.0, 0.0, 0.0]]
Name: 16440, dtype: object
applying ColReader gives
…/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.png
applying PILBase.create gives
PILImage mode=RGB size=1024x1024
Pipeline: ColReader -> TensorBBox.create
starting from
PatientID aeeacc25-019f-4e78-b9e5-829c5d916fc2
dicom …/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.dcm
png …/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.png
Target 0
Classification No Lung Opacity / Not Normal
BBoxes [[0.0, 0.0, 0.0, 0.0]]
Name: 16440, dtype: object
applying ColReader gives
[[0. 0. 0. 0.]]
applying TensorBBox.create gives
TensorBBox of size 1x4
Pipeline: ColReader -> MultiCategorize
starting from
PatientID aeeacc25-019f-4e78-b9e5-829c5d916fc2
dicom …/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.dcm
png …/input/rsna/stage_2_train_images/aeeacc25-019f-4e78-b9e5-829c5d916fc2.png
Target 0
Classification No Lung Opacity / Not Normal
BBoxes [[0.0, 0.0, 0.0, 0.0]]
Name: 16440, dtype: object
applying ColReader gives
[No Lung Opacity / Not Normal]
applying MultiCategorize gives
TensorMultiCategory([2])

Final sample: (PILImage mode=RGB size=1024x1024, TensorBBox([[0., 0., 0., 0.]]), TensorMultiCategory([2]))

Setting up after_item: Pipeline: BBoxLabeler -> PointScaler -> Resize -> ToTensor
Setting up before_batch: Pipeline: bb_pad
Setting up after_batch: Pipeline: IntToFloatTensor -> AffineCoordTfm -> LightingTfm -> Normalize

Building one batch
Applying item_tfms to the first sample:
Pipeline: BBoxLabeler -> PointScaler -> Resize -> ToTensor
starting from
(PILImage mode=RGB size=1024x1024, TensorBBox of size 1x4, TensorMultiCategory([2]))
applying BBoxLabeler gives
(PILImage mode=RGB size=1024x1024, TensorBBox of size 1x4, TensorMultiCategory([2]))
applying PointScaler gives
(PILImage mode=RGB size=1024x1024, TensorBBox of size 1x4, TensorMultiCategory([2]))
applying Resize gives
(PILImage mode=RGB size=224x224, TensorBBox of size 1x4, TensorMultiCategory([2]))
applying ToTensor gives
(TensorImage of size 3x224x224, TensorBBox of size 1x4, TensorMultiCategory([2]))

Adding the next 3 samples
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:3289: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:3226: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "

Applying before_batch to the list of samples
Pipeline: bb_pad
starting from
[(TensorImage of size 3x224x224, TensorBBox of size 1x4, TensorMultiCategory([2])), (TensorImage of size 3x224x224, TensorBBox of size 1x4, TensorMultiCategory([2])), (TensorImage of size 3x224x224, TensorBBox of size 1x4, TensorMultiCategory([2])), (TensorImage of size 3x224x224, TensorBBox of size 2x4, TensorMultiCategory([1]))]
applying bb_pad failed.

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 box_block.summary(df_final)

9 frames
/usr/local/lib/python3.6/dist-packages/fastai2/data/block.py in summary(self, source, bs, show_batch, **kwargs)
    171     if len([f for f in dls.train.before_batch.fs if f.name != 'noop'])!=0:
    172         print("\nApplying before_batch to the list of samples")
--> 173         s = _apply_pipeline(dls.train.before_batch, s)
    174     else: print("\nNo before_batch transform to apply")
    175 

/usr/local/lib/python3.6/dist-packages/fastai2/data/block.py in _apply_pipeline(p, x)
    131         except Exception as e:
    132             print(f"  applying {name} failed.")
--> 133             raise e
    134     return x
    135 

/usr/local/lib/python3.6/dist-packages/fastai2/data/block.py in _apply_pipeline(p, x)
    127         name = f.name
    128         try:
--> 129             x = f(x)
    130             if name != "noop": print(f"  applying {name} gives\n  {_short_repr(x)}")
    131         except Exception as e:

/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in __call__(self, x, **kwargs)
     70     @property
     71     def name(self): return getattr(self, '_name', _get_name(self))
---> 72     def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
     73     def decode  (self, x, **kwargs): return self._call('decodes', x, **kwargs)
     74     def __repr__(self): return f'{self.name}: {self.encodes} {self.decodes}'

/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in _call(self, fn, x, split_idx, **kwargs)
     80     def _call(self, fn, x, split_idx=None, **kwargs):
     81         if split_idx!=self.split_idx and self.split_idx is not None: return x
---> 82         return self._do_call(getattr(self, fn), x, **kwargs)
     83 
     84     def _do_call(self, f, x, **kwargs):

/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in _do_call(self, f, x, **kwargs)
     84     def _do_call(self, f, x, **kwargs):
     85         if not is_tuple(x):
---> 86             return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
     87         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     88         return retain_type(res, x)

/usr/local/lib/python3.6/dist-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
     96         if not f: return args[0]
     97         if self.inst is not None: f = MethodType(f, self.inst)
---> 98         return f(*args, **kwargs)
     99 
    100     def __get__(self, inst, owner):

/usr/local/lib/python3.6/dist-packages/fastai2/vision/data.py in bb_pad(samples, pad_idx)
     31 def bb_pad(samples, pad_idx=0):
     32     "Function that collect samples of labelled bboxes and adds padding with `pad_idx`."
---> 33     samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
     34     max_len = max([len(s[2]) for s in samples])
     35     def _f(img,bbox,lbl):

/usr/local/lib/python3.6/dist-packages/fastai2/vision/data.py in <listcomp>(.0)
     31 def bb_pad(samples, pad_idx=0):
     32     "Function that collect samples of labelled bboxes and adds padding with `pad_idx`."
---> 33     samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
     34     max_len = max([len(s[2]) for s in samples])
     35     def _f(img,bbox,lbl):

/usr/local/lib/python3.6/dist-packages/fastai2/vision/data.py in clip_remove_empty(bbox, label)
     26     bbox = torch.clamp(bbox, -1, 1)
     27     empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) < 0.)
---> 28     return (bbox[~empty], label[~empty])
     29 
     30 # Cell

IndexError: The shape of the mask [2] at index 0 does not match the shape of the indexed tensor [1] at index 0
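
If I read the trace correctly, the failing sample is the last one in the list above: a TensorBBox of size 2x4 paired with a TensorMultiCategory of length 1, i.e. two boxes but only one label. The mismatch can be reproduced in isolation with made-up tensors of those shapes (just a sketch to illustrate the error, not my actual data):

import torch

bbox  = torch.zeros(2, 4)    # two boxes in the sample
label = torch.tensor([2])    # but only one image-level label
empty = ((bbox[..., 2] - bbox[..., 0]) * (bbox[..., 3] - bbox[..., 1]) < 0.)
bbox[~empty], label[~empty]  # IndexError: mask of shape [2] vs indexed tensor of shape [1]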

I am not sure how to work around this padding issue now. Thanks in advance for any support, since I am pretty new to this and have been stuck on this (and similar issues) for days now.

Your bounding boxes are often empty: I see a lot of [0,0,0,0]. I’m not sure the fastai library likes that very much.
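
If you want to test that, one option is to drop the rows with no real boxes before building the DataBlock. A minimal sketch, assuming your BBoxes column holds plain Python lists of [x1, y1, x2, y2] lists:

# Keep only rows whose BBoxes contain at least one non-placeholder box
has_box = df_final['BBoxes'].apply(lambda bs: any(b != [0.0, 0.0, 0.0, 0.0] for b in bs))
df_boxes = df_final[has_box].reset_index(drop=True)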

Yes, there is an inherent skew in the dataset: only about 30% of the images have any bounding boxes (though the ones that do may have several).
Do you think this imbalance could have anything to do with the padding issue?

So I changed my approach from using the DataBlock to using ImageDataLoaders instead.
This is what I am trying:

bbox_dl = ImageDataLoaders.from_df(df=df_final,
                                   path=path,
                                   folder='stage_2_train_images',
                                   suff='.png',
                                   seed=10,
                                   fn_col=0,
                                   label_col=5,
                                   y_block=RegressionBlock,
                                   collate_fn=bb_pad_collate,
                                   num_workers=0,
                                   item_tfms=item_tfms,
                                   batch_tfms=batch_tfms,
                                   bs=bs)

(Although I am not sure collate_fn=bb_pad_collate is even a valid argument for ImageDataLoaders; my only intention was to get rid of the padding issue.) The ImageDataLoaders is created fine, but when I run show_batch(), I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-64-ebf159a0279b> in <module>()
----> 1 bbox_dl.show_batch()

13 frames
/usr/local/lib/python3.6/dist-packages/fastai2/data/core.py in show_batch(self, b, max_n, ctxs, show, unique, **kwargs)
     95             old_get_idxs = self.get_idxs
     96             self.get_idxs = lambda: Inf.zeros
---> 97         if b is None: b = self.one_batch()
     98         if not show: return self._pre_show_batch(b, max_n=max_n)
     99         show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in one_batch(self)
    130     def one_batch(self):
    131         if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')
--> 132         with self.fake_l.no_multiproc(): res = first(self)
    133         if hasattr(self, 'it'): delattr(self, 'it')
    134         return res

/usr/local/lib/python3.6/dist-packages/fastcore/utils.py in first(x)
    182 def first(x):
    183     "First element of `x`, or None if missing"
--> 184     try: return next(iter(x))
    185     except StopIteration: return None
    186 

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in __iter__(self)
     96         self.randomize()
     97         self.before_iter()
---> 98         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
     99             if self.device is not None: b = to_device(b, self.device)
    100             yield self.after_batch(b)

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    343 
    344     def __next__(self):
--> 345         data = self._next_data()
    346         self._num_yielded += 1
    347         if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
    383     def _next_data(self):
    384         index = self._next_index()  # may raise StopIteration
--> 385         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    386         if self._pin_memory:
    387             data = _utils.pin_memory.pin_memory(data)

/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     32                 raise StopIteration
     33         else:
---> 34             data = next(self.dataset_iter)
     35         return self.collate_fn(data)
     36 

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in create_batches(self, samps)
    105         self.it = iter(self.dataset) if self.dataset is not None else None
    106         res = filter(lambda o:o is not None, map(self.do_item, samps))
--> 107         yield from map(self.do_batch, self.chunkify(res))
    108 
    109     def new(self, dataset=None, cls=None, **kwargs):

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in do_batch(self, b)
    126     def create_item(self, s):  return next(self.it) if s is None else self.dataset[s]
    127     def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)
--> 128     def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
    129     def to(self, device): self.device = device
    130     def one_batch(self):

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in create_batch(self, b)
    125     def retain(self, res, b):  return retain_types(res, b[0] if is_listy(b) else b)
    126     def create_item(self, s):  return next(self.it) if s is None else self.dataset[s]
--> 127     def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)
    128     def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
    129     def to(self, device): self.device = device

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in fa_collate(t)
     44     b = t[0]
     45     return (default_collate(t) if isinstance(b, _collate_types)
---> 46             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     47             else default_collate(t))
     48 

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in <listcomp>(.0)
     44     b = t[0]
     45     return (default_collate(t) if isinstance(b, _collate_types)
---> 46             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     47             else default_collate(t))
     48 

/usr/local/lib/python3.6/dist-packages/fastai2/data/load.py in fa_collate(t)
     43 def fa_collate(t):
     44     b = t[0]
---> 45     return (default_collate(t) if isinstance(b, _collate_types)
     46             else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
     47             else default_collate(t))

/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/collate.py in default_collate(batch)
     53             storage = elem.storage()._new_shared(numel)
     54             out = elem.new(storage)
---> 55         return torch.stack(batch, 0, out=out)
     56     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
     57             and elem_type.__name__ != 'string_':

RuntimeError: stack expects each tensor to be equal size, but got [2, 4] at entry 0 and [1, 4] at entry 1

This is how my BBoxes are stored in the dataframe,
e.g. [[152.0, 264.0, 531.0, 477.0], [152.0, 562.0, 605.0, 818.0]] for images with multiple bounding boxes.
There are also images that have just one BBox, and images that don't have any bounding boxes at all, in which case I store [[0.0, 0.0, 0.0, 0.0]].
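
For reference, I can reproduce the stacking failure in isolation with two ragged box tensors, and padding every sample up to the batch max (which is what bb_pad seems to do, per the source in the earlier trace) makes the stack work. A standalone sketch, not my actual pipeline:

import torch

b1 = torch.zeros(2, 4)   # image with two boxes
b2 = torch.zeros(1, 4)   # image with one box
# torch.stack([b1, b2]) raises: stack expects each tensor to be equal size

# Padding every sample up to the batch max fixes the collation:
max_len = max(b.shape[0] for b in (b1, b2))
padded = [torch.cat([b, b.new_zeros(max_len - b.shape[0], 4)]) for b in (b1, b2)]
batch = torch.stack(padded)   # shape: torch.Size([2, 2, 4])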

Is this format the reason I am getting the above error? If so, what do I need to change it to, and how?