Inconsistent results between ClassificationInterpretation.top_losses() and DatasetFormatter().from_toplosses()

Following Jeremy’s notebooks from lessons 1 and 2 (the PETS dataset), my main goal is to find out which training files (i.e. which filenames) are classified incorrectly during the training process. I’d like to get the loss value, the probability for each class, and the actual label for each training example. I can accomplish this in two ways, sketched below:

  • ClassificationInterpretation.top_losses(), which returns losses and indices.
  • DatasetFormatter().from_toplosses(), which returns a dataset and indices.
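
Concretely, these are the two calls I’m comparing, as a rough sketch (assuming learn is the fitted learner described below, and that these fastai v1 import paths match your install):

from fastai.vision import ClassificationInterpretation, DatasetType
from fastai.widgets import DatasetFormatter

# approach 1: interpretation object -> losses sorted in descending order, plus item indices
interp = ClassificationInterpretation.from_learner(learn, ds_type=DatasetType.Train)
losses, idxs = interp.top_losses()

# approach 2: dataset formatter -> the dataset, plus indices sorted by loss
ds, idxs2 = DatasetFormatter().from_toplosses(learn, ds_type=DatasetType.Train)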

After training a model, I create the ClassificationInterpretation object with
interp = ClassificationInterpretation.from_learner(learn, ds_type=DatasetType.Train)

where learn is a resnet34 learner I’ve fitted for 4 epochs. What’s weird is:

  • Every time I rerun the from_learner() method, I don’t necessarily get the same number of misclassified images in the confusion matrix. Since the learner is done training and the dataset is the same, shouldn’t the number of misclassified images be identical every time?
  • The number of losses and indices returned by the top_losses() method doesn’t seem to match the actual number of training examples.
  • Every time I rerun the from_learner() method, the top_losses() method returns a different set of indices. I’m not sure how to tie them back to the original filenames, since the index values keep changing.
  • I see similar behavior when I call the Learner.get_preds() method (see the reproduction sketch after this list):
    preds, y, losses = learn.get_preds(with_loss=True, ds_type=DatasetType.Train)
    The index (argmax) of the maximum loss value changes every time the method is called.
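
For example, this is roughly what I mean, as a minimal reproduction sketch (assuming the same learn as above):

preds1, y1, losses1 = learn.get_preds(with_loss=True, ds_type=DatasetType.Train)
preds2, y2, losses2 = learn.get_preds(with_loss=True, ds_type=DatasetType.Train)
# the model is unchanged between the two calls, yet these two values usually differ
print(losses1.argmax().item(), losses2.argmax().item())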

Can anybody help me with these?
I’ve shared my notebook here.

This problem doesn’t seem to happen when working on the validation portion of the data.


Hm, this might or might not be related, but when I run DatasetFormatter.from_toplosses on both the validation set and the training set, I get the same number of indices for both (namely, the number of indices for the training set).
I.e.
ds, idxs = DatasetFormatter().from_toplosses(learn, ds_type=DatasetType.Valid)
ds_t, idxs_t = DatasetFormatter().from_toplosses(learn, ds_type=DatasetType.Train)
and
len(idxs) == len(idxs_t) == the size of the training data.
However, the order of the indices is not the same, i.e. idxs_t[:10] != idxs[:10].


It is related, I believe. I mentioned this in my notebook too. :slight_smile: I’m also seeing the same thing.

I suspect this line of code:
_,_,top_losses = learn.get_preds(ds_type=DatasetType.Fix, with_loss=True)
in the get_toplosses_idxs() method, which is called by from_toplosses().
As you can see, instead of using the user’s input for ds_type, the method hard-codes the DatasetType as Fix. Unfortunately, I couldn’t find any clear documentation on what Fix is, but maybe it chooses n_train random samples from the whole dataset, where n_train is the number of samples in the training set. That would explain why we get n_train samples no matter which ds_type we choose, and why the indices are different across calls.

See this post, which helps explain what Fix is and how we should use from_toplosses():


In case anyone comes here looking to find the top losses for the training set, the correct way to do this is:

interp = learn_cln.interpret(DatasetType.Fix)    # interpret on the training data served in a fixed order
ds = db.fix_ds                                   # the training dataset, unshuffled
interp.plot_confusion_matrix()
interp.plot_top_losses(4)
ds.items[interp.top_losses(4)[1]]                # filenames of the 4 highest-loss training images
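
If you also want the loss value, the per-class probabilities, and the actual label for each of those items (as asked in the original post), something along these lines should work. This is only a sketch based on fastai v1’s ClassificationInterpretation attributes (interp.preds holds the predictions); double-check the names against your installed version:

losses, idxs = interp.top_losses()   # every training item, sorted by descending loss
probs = interp.preds[idxs]           # predicted probabilities for each class
labels = [ds.y[i] for i in idxs]     # actual labels
files = ds.items[idxs]               # corresponding filenames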

If you try using the DatasetType.Train flag, you’ll notice that multiple calls to learn_cln.interpret produce different/inconsistent results. This is because the train_ds shuffles the data, while the fix_ds does not.
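
You can check the shuffling yourself with a rough sanity test like the one below (just an illustrative sketch, assuming learn_cln from above); get_preds returns the labels in whatever order the data loader served them:

_, y_fix1 = learn_cln.get_preds(ds_type=DatasetType.Fix)
_, y_fix2 = learn_cln.get_preds(ds_type=DatasetType.Fix)
print((y_fix1 == y_fix2).all())      # True: Fix serves the training data in a fixed order

_, y_trn1 = learn_cln.get_preds(ds_type=DatasetType.Train)
_, y_trn2 = learn_cln.get_preds(ds_type=DatasetType.Train)
print((y_trn1 == y_trn2).all())      # usually False: Train reshuffles on every pass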

If you want these changes, you’ll need to grab the latest code from master until a new version is published to PyPI. If you’re getting an AttributeError for fix_ds, you probably don’t have the latest code.

pip install --force-reinstall git+https://github.com/fastai/fastai.git