ValueError when using ClassificationInterpretation with loaded learner/model

I’m going through the 05_pet_breeds notebook and get the following error when I call learn.fine_tune(), then learn.export(), then load_learner() and learn.eval(), and finally try to plot the confusion matrix:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[78], line 2
      1 interp = ClassificationInterpretation.from_learner(learn)
----> 2 interp.plot_confusion_matrix(figsize=(12,12), dpi=60)

File /venv/lib/python3.8/site-packages/fastai/interpret.py:130, in ClassificationInterpretation.plot_confusion_matrix(self, normalize, title, cmap, norm_dec, plot_txt, **kwargs)
    128 "Plot the confusion matrix, with `title` and using `cmap`."
    129 # This function is mainly copied from the sklearn docs
--> 130 cm = self.confusion_matrix()
    131 if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    132 fig = plt.figure(**kwargs)

File /venv/lib/python3.8/site-packages/fastai/interpret.py:114, in ClassificationInterpretation.confusion_matrix(self)
    112 "Confusion matrix as an `np.ndarray`."
    113 x = torch.arange(0, len(self.vocab))
--> 114 _,targs,decoded = self.learn.get_preds(dl=self.dl, with_decoded=True, with_preds=True, 
    115                                        with_targs=True, act=self.act)
    116 d,t = flatten_check(decoded, targs)
    117 cm = ((d==x[:,None]) & (t==x[:,None,None])).long().sum(2)

ValueError: not enough values to unpack (expected 3, got 2)

Steps to reproduce:

! [ -e /content ] && pip install -Uqq fastbook
import fastbook
fastbook.setup_book()

from fastai.vision.all import *
from fastbook import *
path = untar_data(URLs.PETS)
pets = DataBlock(blocks = (ImageBlock, CategoryBlock),
                 get_items=get_image_files, 
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                 item_tfms=Resize(460),
                 batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = pets.dataloaders(path/"images")
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.remove_cb(ProgressCallback)
learn.fine_tune(2)
learn.export('pets_cuda.pkl')
learn = load_learner('pets_cuda.pkl')
learn.eval()
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)

It works fine when I run the interpretation on the learner straight after training, without exporting and reloading. However, it would be good to have it work with a loaded learner so the model doesn’t have to be retrained every time.


I’m not 100% sure, since I haven’t used this functionality yet, but my guess is that exporting the learner doesn’t include the data it used for training/validation, and that data is exactly what’s needed to compute the confusion matrix.
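
A quick way to check (just a sketch based on the export/load code above; as far as I know learn.export() swaps in an empty copy of the DataLoaders before pickling, so I’d expect both counts to come out as 0):

learn = load_learner('pets_cuda.pkl')
# the exported learner only carries an empty copy of the DataLoaders,
# so neither split has any items to build a confusion matrix from
print(len(learn.dls.train_ds), len(learn.dls.valid_ds))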

EDIT:

I just double-checked your code, and this does indeed seem to be the problem.

If you do:

..
learn.export('pets_cuda.pkl')
learn = load_learner('pets_cuda.pkl')
# add the DataLoaders from the training run back onto the loaded learner
learn.dls = dls
learn.eval()
..

The confusion matrix works again.
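
And since the original goal was to avoid retraining: rebuilding the DataLoaders from the DataBlock doesn’t train anything, it only re-indexes and transforms the image files, so in a fresh session you can do something like this (a sketch that reuses the pets DataBlock from the first post):

from fastai.vision.all import *

path = untar_data(URLs.PETS)
# same DataBlock definition as in the original post
pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                 item_tfms=Resize(460),
                 batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = pets.dataloaders(path/"images")

# load the exported learner and attach the data; no training happens here
learn = load_learner('pets_cuda.pkl')
learn.dls = dls

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)

Because the splitter uses seed=42, the rebuilt validation split should match the one used during training. If I remember the signature correctly, Interpretation.from_learner also accepts a dl argument, so ClassificationInterpretation.from_learner(learn, dl=dls.valid) should work as well without touching learn.dls, but I haven’t verified that in this fastai version.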


Great, thanks!
