Does text interpretation need to use the same dataset that was used for training?

I’m using fastai 1.0.61.
I’m trying to view the interpretation information in an evaluation environment, where the exact training data the model was trained on isn’t necessarily available.
Also, I don’t necessarily need to split the data into training and test sets, so I could interpret more data points.
However, if I create a TextClassificationInterpretation with a larger dataset than the model was trained with, I get an out-of-bounds exception, like this:

```
self._interpreter = TextClassificationInterpretation.from_learner(self._classifier, ds_type=DatasetType.Train)
  File "/usr/local/lib/python3.8/site-packages/fastai/text/interpret.py", line 47, in from_learner
    return cls(learn, *learn.get_preds(ds_type=ds_type, activ=activ, with_loss=True, ordered=True))
  File "/usr/local/lib/python3.8/site-packages/fastai/text/learner.py", line 95, in get_preds
    preds = [p[reverse_sampler] for p in preds]
  File "/usr/local/lib/python3.8/site-packages/fastai/text/learner.py", line 95, in <listcomp>
    preds = [p[reverse_sampler] for p in preds]
IndexError: index 17 is out of bounds for dimension 0 with size 16
```

Here, the original training set had 17 examples, and the current one, which I set manually with

```python
self._classifier = load_learner(…)
self._classifier.data = …
```

has 25.
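
For reference, here is roughly the full flow I’m using. This is a minimal sketch: the paths, column names, and eval_data.csv are placeholders, and I’m assuming the exported learner still exposes the training vocab as learner.data.vocab.

```python
import pandas as pd
from fastai.text import load_learner, TextClasDataBunch
from fastai.text.interpret import TextClassificationInterpretation
from fastai.basic_data import DatasetType

# Load the exported classifier (trained elsewhere; the training data
# is not available in this environment).
learner = load_learner('models', 'export.pkl')

# The evaluation data I actually want to interpret (25 rows in my case).
eval_df = pd.read_csv('eval_data.csv')

# Rebuild a DataBunch around the evaluation data, reusing the training
# vocab, and swap it into the loaded learner.
data = TextClasDataBunch.from_df(
    'models', train_df=eval_df, valid_df=eval_df,
    vocab=learner.data.vocab,   # assumption: vocab is still reachable after load_learner
    text_cols='text', label_cols='label')
learner.data = data

# This is the call that raises the IndexError shown above.
interp = TextClassificationInterpretation.from_learner(
    learner, ds_type=DatasetType.Train)
```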

What would be the best approach here?