When I use
learn.get_preds(dl=dls)
and the number of items in the dataloaders is not exactly divisible by the batch size, I get an error. For example, with 101 items and a batch size of 8, it fails on the last batch with:
IndexError                                Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 preds = learn.get_preds(dl=dls)

~/anaconda3/envs/fastai2/lib/python3.8/site-packages/fastai/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
    240             res[pred_i] = act(res[pred_i])
    241             if with_decoded: res.insert(pred_i+2, getattr(self.loss_func, 'decodes', noop)(res[pred_i]))
--> 242             if reorder and hasattr(dl, 'get_idxs'): res = nested_reorder(res, tensor(idxs).argsort())
    243         return tuple(res)
    244         self._end_cleanup()

~/anaconda3/envs/fastai2/lib/python3.8/site-packages/fastai/torch_core.py in nested_reorder(t, idxs)
    651     "Reorder all tensors in `t` using `idxs`"
    652     if isinstance(t, (Tensor,L)): return t[idxs]
--> 653     elif is_listy(t): return type(t)(nested_reorder(t_, idxs) for t_ in t)
    654     if t is None: return t
    655     raise TypeError(f"Expected tensor, tuple, list or L but got {type(t)}")

~/anaconda3/envs/fastai2/lib/python3.8/site-packages/fastai/torch_core.py in <genexpr>(.0)
    651     "Reorder all tensors in `t` using `idxs`"
    652     if isinstance(t, (Tensor,L)): return t[idxs]
--> 653     elif is_listy(t): return type(t)(nested_reorder(t_, idxs) for t_ in t)
    654     if t is None: return t
    655     raise TypeError(f"Expected tensor, tuple, list or L but got {type(t)}")

~/anaconda3/envs/fastai2/lib/python3.8/site-packages/fastai/torch_core.py in nested_reorder(t, idxs)
    650 def nested_reorder(t, idxs):
    651     "Reorder all tensors in `t` using `idxs`"
--> 652     if isinstance(t, (Tensor,L)): return t[idxs]
    653     elif is_listy(t): return type(t)(nested_reorder(t_, idxs) for t_ in t)
    654     if t is None: return t

IndexError: index 98 is out of bounds for dimension 0 with size 96
I've found that if I set the batch size to 1 it always works, but inference is much slower (a simplified sketch of both cases is below). Is this a known issue? Am I doing something wrong?
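For reference, a rough sketch of the two cases; `dblock`, `items`, and `learn` are placeholders for my actual DataBlock, the 101 items, and the trained Learner, not my exact code:

```python
# Placeholders: `dblock` is my DataBlock, `items` are the 101 items,
# and `learn` is the trained Learner.
dls = dblock.dataloaders(items, bs=8)
preds = learn.get_preds(dl=dls)   # raises the IndexError above on the last, partial batch

dls = dblock.dataloaders(items, bs=1)
preds = learn.get_preds(dl=dls)   # works, but inference is much slower
```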
Thanks