"IndexError: index out of range in self" when running Tabular Learner get_preds

I have a TabularLearner that I trained and saved, and that I load with load_learner. When I need to run new predictions I use the code below:

learn = load_learner(modelPath/'export.pkl')
dl = learn.dls.test_dl(df)
preds, targs = learn.get_preds(dl=dl)

where df is the new dataset (a pandas DataFrame).

This was working fine, but for one dataset I’m getting the error below:

... site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')

... site-packages\fastai\learner.py in all_batches(self)
    160         self.n_iter = len(self.dl)
--> 161         for o in enumerate(self.dl): self.one_batch(*o)
    162 

... site-packages\fastai\learner.py in one_batch(self, i, b)
    178         self._split(b)
--> 179         self._with_events(self._do_one_batch, 'batch', CancelBatchException)
    180 

... site-packages\fastai\learner.py in _with_events(self, f, event_type, ex, final)
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try:       self(f'before_{event_type}')       ;f()
    156         except ex: self(f'after_cancel_{event_type}')

... site-packages\fastai\learner.py in _do_one_batch(self)
    163     def _do_one_batch(self):
--> 164         self.pred = self.model(*self.xb)
    165         self('after_pred')

... site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(

... site-packages\fastai\tabular\model.py in forward(self, x_cat, x_cont)
     47         if self.n_emb != 0:
---> 48             x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
     49             x = torch.cat(x, 1)

... site-packages\fastai\tabular\model.py in <listcomp>(.0)
     47         if self.n_emb != 0:
---> 48             x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
     49             x = torch.cat(x, 1)

... site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(

... site-packages\torch\nn\modules\sparse.py in forward(self, input)
    144     def forward(self, input: Tensor) -> Tensor:
--> 145         return F.embedding(
    146             input, self.weight, self.padding_idx, self.max_norm,

... site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1912         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1913     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1914 

IndexError: index out of range in self


I was wondering if there is a way I can tell which column or record is breaking the code.
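
One way to narrow it down is to compare the codes that feed each embedding against the size of that embedding table. This is a minimal diagnostic sketch, assuming fastai v2 internals: dl.dataset is the processed TabularPandas, its .items DataFrame holds the integer codes for the categorical columns, and to.cat_names is in the same order as learn.model.embeds (which the traceback shows is the ModuleList of embeddings):

dl = learn.dls.test_dl(df)
to = dl.dataset  # processed TabularPandas; to.items is the encoded DataFrame
for col, emb in zip(to.cat_names, learn.model.embeds):
    max_code = int(to.items[col].max())
    if max_code >= emb.num_embeddings:
        # this column has codes the embedding table can't index
        print(col, 'has code', max_code, 'but the embedding only has', emb.num_embeddings, 'rows')
        print(to.items.index[to.items[col] >= emb.num_embeddings])

Any column it flags is one whose codes overflow its embedding, and the printed index points at the offending records.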

To give more info and to present a workaround that I found, I’m posting again.

The problem was happening when the embedding layer ran on a new dataset that had a column with more categories than the DataLoaders were trained with. The column in this pandas DataFrame is of categorical dtype, and it appears that when a column is already categorical, fastai uses the column’s own category codes instead of re-encoding the values against the training vocabulary, so any code past the end of the embedding table raises the IndexError. When I convert the column to str using:
df[col_name] = df[col_name].astype(str)
then the model runs fine.
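
As a blanket version of that workaround (a sketch only, not an official fastai recipe): select_dtypes('category') picks up every pandas-categorical column, and converting them all to plain strings before building the test DataLoader lets Categorify re-encode them against the training vocabulary, where unseen values should fall back to the #na# category:

cat_cols = df.select_dtypes('category').columns
df[cat_cols] = df[cat_cols].astype(str)  # plain strings; Categorify re-encodes them
dl = learn.dls.test_dl(df)
preds, targs = learn.get_preds(dl=dl)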

Do I need to change all categorical columns to string before running inference with a previously trained model?
