TypeError: len() of a 0-d tensor on dataset with bboxes

Hi all,

I’m trying to use the COCO-Text dataset, which has bbox annotations for all text regions in the images. It looks like I got it into a fastai DataBunch correctly (see the images below and the attached notebook), but sometimes it throws the error below. Any suggestions or guidance on how to debug this?

notebook.pdf (1.5 MB)


```
TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 data.show_batch(1)

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_data.py in show_batch(self, rows, ds_type, **kwargs)
    153     def show_batch(self, rows:int=5, ds_type:DatasetType=DatasetType.Train, **kwargs)->None:
    154         "Show a batch of data in `ds_type` on a few rows."
--> 155         x,y = self.one_batch(ds_type, True, True)
    156         if self.train_ds.x._square_show: rows = rows ** 2
    157         xs = [self.train_ds.x.reconstruct(grab_idx(x, i, self._batch_first)) for i in range(rows)]

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_data.py in one_batch(self, ds_type, detach, denorm)
    134         w = self.num_workers
    135         self.num_workers = 0
--> 136         try:     x,y = next(iter(dl))
    137         finally: self.num_workers = w
    138         if detach: x,y = to_detach(x),to_detach(y)

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_data.py in __iter__(self)
     67         "Process and returns items from `DataLoader`."
     68         assert not self.skip_size1 or self.batch_size > 1, "Batch size cannot be one if skip_size1 is set to True"
---> 69         for b in self.dl:
     70             y = b[1][0] if is_listy(b[1]) else b[1]
     71             if not self.skip_size1 or y.size(0) != 1: yield self.proc_batch(b)

/opt/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
    635                 self.reorder_dict[idx] = batch
    636                 continue
--> 637             return self._process_next_batch(batch)
    638
    639     next = __next__  # Python 2 compatibility

/opt/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    656         self._put_indices()
    657         if isinstance(batch, ExceptionWrapper):
--> 658             raise batch.exc_type(batch.exc_msg)
    659         return batch
    660

TypeError: Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/opt/anaconda3/lib/python3.6/site-packages/fastai/vision/data.py", line 42, in bb_pad_collate
    max_len = max([len(s[1].data[1]) for s in samples])
  File "/opt/anaconda3/lib/python3.6/site-packages/fastai/vision/data.py", line 42, in <listcomp>
    max_len = max([len(s[1].data[1]) for s in samples])
  File "/opt/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 404, in __len__
    raise TypeError("len() of a 0-d tensor")
TypeError: len() of a 0-d tensor
```
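For reference, the final error reproduces with two lines of pure PyTorch; `len()` is simply undefined for a 0-d tensor:

```python
import torch

t = torch.tensor(1)  # a 0-d (scalar) tensor, e.g. a single squeezed label
len(t)               # raises TypeError: len() of a 0-d tensor
```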

The label tensor returned for some samples seems to have no length at all: it is 0-dimensional instead of a 1-element tensor, which is what `len()` chokes on.
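A minimal sketch to confirm that, assuming the DataBunch is named `data` as in the notebook, and that `y.data` for each item is the `(bboxes, labels)` pair that `bb_pad_collate` indexes in the traceback:

```python
import torch

# Collect indices of items whose label tensor is 0-d; these are the
# samples that make bb_pad_collate's len() call blow up.
bad_idxs = []
for i in range(len(data.train_ds)):
    _, y = data.train_ds[i]   # y.data is (bboxes, labels) here
    labels = y.data[1]
    if isinstance(labels, torch.Tensor) and labels.dim() == 0:
        bad_idxs.append(i)

print(f"{len(bad_idxs)} suspect items, first few: {bad_idxs[:10]}")
```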
I get the same error when I try to get top_losses through the interp object:
`interp = ClassificationInterpretation(data, preds, y, losses)`

`losses` comes back as the mean of all the losses, when it should be one value per item in the batch.

Maybe someone from the fastai team can comment on why `losses` is a single scalar rather than a tensor over the batch.
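In case it helps: a sketch of getting per-item losses in fastai v1, assuming a trained Learner named `learn` (not shown above). `get_preds(with_loss=True)` returns one loss per item, whereas calling a loss function directly gives the batch mean because of PyTorch's default `reduction='mean'`:

```python
from fastai.vision import *

# One loss per validation item, not a single mean scalar.
preds, y, losses = learn.get_preds(ds_type=DatasetType.Valid, with_loss=True)
print(losses.shape)  # torch.Size([n_valid_items])

# The documented constructor builds the interpretation object
# with per-item losses for you:
interp = ClassificationInterpretation.from_learner(learn)
interp.top_losses(9)
```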