Interp.losses, interp.preds

Hi @sgugger
Given interp = ClassificationInterpretation.from_learner(learn)
interp.losses (shape: num_validation x 1)
interp.preds (shape: num_validation x num_classes)

How is interp.losses related to interp.preds? Perhaps you could point me to where this is described in the code?

I figured it out. preds are the outputs of the model, and losses are the default cross-entropy loss computed per prediction, giving one value per image. The only remaining question is: where is the default loss defined?

It’s inferred from what your targets (ys) are: if it’s a classification problem, the loss is CrossEntropyLossFlat, otherwise it’s MSELossFlat. You can see this in cnn_learner’s code.
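To make the preds-to-losses relationship concrete, here is a minimal pure-Python sketch (toy numbers, my own helper names, no fastai or PyTorch) of how a softmax cross-entropy turns one row of predictions plus a target class into a single per-image loss:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one row of scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target):
    """Negative log-probability of the target class: one loss per image."""
    probs = softmax(logits)
    return -math.log(probs[target])

# Toy batch: 2 "images", 3 classes
preds = [[2.0, 0.5, 0.1],   # confident in class 0
         [0.1, 0.2, 3.0]]   # confident in class 2
targets = [0, 1]            # second target is wrong -> larger loss

losses = [cross_entropy(p, t) for p, t in zip(preds, targets)]
```

Stacking one such value per validation image is what gives losses its num_validation x 1 shape.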

Yes, thanks. I’m working through lecture 1, so I figured it was cross entropy. My issue is that I want to modify plot_top_losses() to also give me, even just as text, the top 3 predictions for the images with the worst losses. Do you have any ideas on how to do that? Right now, I have this:

    # Get the raw results
    raw_preds = interp.preds    # num_validation x num_classes, output of the network
    raw_losses = interp.losses  # one loss per image, so num_validation x 1

    # Get the N bottom predictions per image, for use with the worst-loss case
    topN = 9
    raw_preds_sorted = raw_preds.argsort(dim=1)
    pred_class_bottomN = raw_preds_sorted[:, :topN]

    # Sort losses to figure out which images give the worst loss values
    losses, idx = interp.losses.topk(topN, largest=True)

Here is where I’m stuck: when I call plot_top_losses, the results don’t add up. That is, the classes shown above the images from

    interp.plot_top_losses(9, largest=False, figsize=(15, 11))

don’t match the classes from:

    print(pred_class_bottomN[idx, 0])               # single class prediction for the worst loss per image
    print(dbunch.vocab[pred_class_bottomN[idx, 0]]) # actual class names for the worst losses

Any ideas?
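One thing worth checking (a guess, illustrated with plain Python lists and toy numbers rather than tensors): argsort sorts ascending by default, so `raw_preds_sorted[:, :topN]` picks the *lowest*-scoring classes, whereas plot_top_losses prints the arg-max class. Sorting in descending order gives the top-N predictions instead:

```python
# Toy scores for one image over 5 classes (hypothetical numbers)
scores = [0.05, 0.60, 0.10, 0.20, 0.05]

# Ascending argsort -- what raw_preds.argsort(dim=1) does
asc = sorted(range(len(scores)), key=lambda i: scores[i])
bottom3 = asc[:3]   # the 3 LOWEST-scoring classes

# Descending argsort -- equivalent to raw_preds.argsort(dim=1, descending=True)
desc = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
top3 = desc[:3]     # the 3 top predictions; top3[0] is the argmax
```

If that is indeed the issue, `raw_preds.argsort(dim=1, descending=True)[:, :topN]` (or `raw_preds.topk(topN, dim=1)`) should make the first column agree with the classes plot_top_losses prints.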