Bug with ClassificationInterpretation

Suppose you want to see the samples with the highest losses in the training set. You can call it like this:

interp = ClassificationInterpretation.from_learner(learn,ds_type=DatasetType.Train)

However,
interp.plot_top_losses(9, figsize=(15,11))

assumes the validation set (hard-coded) and throws an index error.

ClassificationInterpretation.from_learner needs to remember the dataset you called it on and apply its interpretation methods to that same dataset.
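The pattern being asked for can be sketched in plain Python. This is a simplified stand-in, not the fastai source: the names `from_learner`, `ds_type`, and `top_losses` mirror the library's API, but the learner here is just a dict of per-dataset losses used to illustrate the idea of storing the dataset choice at construction time instead of hard-coding it in each method.

```python
class Interp:
    """Minimal sketch: remember which dataset we were built from."""

    def __init__(self, learner, ds_type):
        self.learner = learner
        self.ds_type = ds_type  # stored once, at construction time

    @classmethod
    def from_learner(cls, learner, ds_type="valid"):
        # default to the validation set, but respect whatever was passed
        return cls(learner, ds_type)

    def top_losses(self, k):
        # use the stored ds_type rather than a hard-coded "valid"
        losses = self.learner[self.ds_type]
        return sorted(losses, reverse=True)[:k]

# toy "learner": per-sample losses for each dataset
learner = {"train": [0.2, 1.5, 0.1], "valid": [0.3, 0.9]}
interp = Interp.from_learner(learner, ds_type="train")
print(interp.top_losses(2))  # → [1.5, 0.2], highest losses from the training set
```

With this shape, every interpretation method automatically operates on the dataset the object was created from, which is exactly the behavior requested above.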

Thanks for your good work on the new fastai library!


Good point! Should be fixed in master now.


@sgugger Hi, I’m getting another error with ClassificationInterpretation. Attempting to do interp = ClassificationInterpretation.from_learner(learn) returns:

ValueError: ('The argument is not a tensor', "(tensor([[-3.5561, -0.5064, -0.8093, ..., -0.1481, 3.7251, 8.7142],\n [-2.8288, -0.6909, -4.4075, ..., 6.8117, -0.7254, 0.2404],\n [ 4.2408, -0.3055, 7.2959, ..., -0.3430, -2.1235, -3.4119],\n . . .

Happen to know what might be causing this? I’m using resnet34 with the following metrics/callbacks:

f_0_5 = FBeta(beta=0.5)  # weight precision higher
f_0_5.name = 'f_0_5'
metrics=[f_0_5, accuracy, error_rate],
callbacks=[TerminateOnNaNCallback()],
callback_fns=[PeakMemMetric,
              partial(EarlyStoppingCallback, monitor='accuracy',
                      min_delta=0.005, patience=3)]

Edit: Looks like resetting callbacks/callback_fns to empty lists fixed it. Not sure which one was the culprit yet; maybe TerminateOnNaNCallback.