Issues viewing top losses for ULMFiT model?


(David Cato) #1

I am currently having issues viewing the top losses for an NLP problem based on ULMFiT and Fastai v1’s text module.

The code below is modified from ClassificationInterpretation.plot_top_losses in the vision module's learner.py, where I've replaced plotting the image data with printing the text.

import numpy as np
from fastai.vision import ClassificationInterpretation  # defined in the vision module's learner.py in this fastai v1 version

n = 5
interp = ClassificationInterpretation.from_learner(learn)
tl_val, tl_idx = interp.top_losses(n)  # top-n loss values and their validation-set indices
classes = interp.data.classes
print('prediction / actual / loss / probability', '\n')
for idx in tl_idx:
    text, cl = interp.data.valid_ds[idx]  # print the text instead of showing an image
    cl = int(cl)
    print(f'{classes[interp.pred_class[idx]]} / {classes[cl]} / {interp.losses[idx]:.2f} / {interp.probs[idx][cl]:.2f}  {text.text}')
    print('predictions:', *[(cat, np.round(float(t), 3)) for cat, t in zip(classes, interp.probs[idx])])
    print()

This produces the following output:

prediction / actual / loss / probability 

neutral / negative / 12.89 / 0.00  xxbos @southwestair xxmaj flight xxunk ( n xxunk d ) departs xxup @mco enroute to xxunk http : / / t.co / xxunk 4 xxunk
predictions: ('negative', 0.0) ('neutral', 1.0) ('positive', 0.0)

negative / negative / 7.14 / 0.99  xxbos @americanair i can not believe how long flight xxunk xxup phi is taking . i know it 's xxup us xxmaj airways but you own it . i would really like to get home .
predictions: ('negative', 0.989) ('neutral', 0.01) ('positive', 0.001)

positive / neutral / 6.95 / 0.00  xxbos @americanair could you guys follow me so i can dm y all please
predictions: ('negative', 0.017) ('neutral', 0.001) ('positive', 0.982)

positive / negative / 6.84 / 0.01  xxbos @usairways what is going on with the computers ? xxmaj why is my flight grounded ? xxmaj why does your airline suck so much ? xxhtstart xxmaj xxunk xxmaj questions xxhtend
predictions: ('negative', 0.006) ('neutral', 0.001) ('positive', 0.993)

positive / positive / 6.38 / 1.00  xxbos xxmaj power xxmaj xxunk xxup rt @jetblue : xxmaj our fleet 's on fleek . http : / / t.co / t 9 s 68 xxunk
predictions: ('negative', 0.002) ('neutral', 0.002) ('positive', 0.996)

(This is the Twitter US Airline Sentiment Classification dataset.)

The first result looks like mislabeled data to me; the second result, however, is very strange. A confident negative prediction (0.989) on an example whose target was also negative somehow produced the second-highest loss.

I haven’t changed the default loss function, which is CrossEntropyLoss according to learn.loss_func.func.
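As a sanity check (a minimal sketch, plugging in the 0.989 probability from the second result above), here is what CrossEntropyLoss should report for a correct, confident prediction:

import math

# Cross entropy for a single example is -log(p_target).
# The second result's target is 'negative', predicted with probability 0.989:
expected_loss = -math.log(0.989)
print(f'{expected_loss:.4f}')  # ~0.0111, nowhere near the reported 7.14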

What could be causing this? Any ideas on where to troubleshoot from here would be most appreciated!

Also, I expect I'm not the only one who would appreciate a ClassificationInterpretation equivalent built for the text module. I'd be happy to help develop one if others would find it useful.


#2

Hmm… I haven't gotten very far with the course yet. However, from a brief look at the fastai documentation, the ClassificationInterpretation class lives under the vision submodule, so I take it that it's only meant to be used for image classification.

You could use the losses attribute on your learner's recorder, and possibly use that information to plot the top losses in some way. Taking a look at how ClassificationInterpretation's plot_top_losses() is implemented in the source could get you started.
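Something along these lines might work as a starting point (a rough sketch, assuming a trained fastai v1 text classifier named learn; get_preds(with_loss=True) returns per-example losses alongside the predictions and targets):

from fastai.text import *

# Per-example predictions, targets, and losses on the validation set
preds, targets, losses = learn.get_preds(with_loss=True)

# Pick out the n examples with the highest loss and print their text
top_losses, top_idxs = losses.topk(5)
for loss, idx in zip(top_losses, top_idxs):
    text, target = learn.data.valid_ds[idx]
    print(f'{loss:.2f} | {learn.data.classes[int(target)]} | {text.text}')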