As I wrote in https://forums.fast.ai/t/how-to-use-interpretation-api-in-case-of-multilabeled-dataset/34706, the Interpretation API seemingly does not work when the dataset is multi-labeled.
You can still get the top losses, but you cannot compute the confusion matrix, plot it, or plot the samples with the top losses.
The key issue is associating the top losses with the corresponding data in valid_ds, and this association breaks down in the multi-labeled case.
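As a starting point, here is a minimal numpy sketch of what that association would look like: compute a per-sample loss (mean binary cross-entropy over the labels, which is what a multi-label head optimizes), rank the samples, and use the resulting indices to look items up in valid_ds. The function names (`per_sample_bce`, `top_losses`) are mine, not fastai's, and this is a hand-rolled approximation, not the library's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def per_sample_bce(logits, targets):
    """Mean binary cross-entropy per sample across all labels."""
    p = np.clip(sigmoid(logits), 1e-7, 1 - 1e-7)
    bce = -(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    return bce.mean(axis=1)

def top_losses(logits, targets, k=3):
    """Indices of the k samples with the highest loss, worst first.
    These indices are what you would use to index into valid_ds."""
    losses = per_sample_bce(logits, targets)
    return np.argsort(losses)[::-1][:k], losses

# Toy example: 4 samples, 3 labels each.
logits = np.array([[ 5.0,  5.0, -5.0],   # confidently right
                   [-5.0,  5.0,  5.0],   # confidently wrong on label 0
                   [ 0.1, -0.1,  0.0],   # uncertain everywhere
                   [ 5.0, -5.0, -5.0]])  # confidently right
targets = np.array([[1, 1, 0],
                    [1, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]], dtype=float)

idxs, losses = top_losses(logits, targets, k=2)
print(idxs)  # the confidently-wrong sample should rank worst
```

In fastai v1 one would presumably get the raw ingredients from `learn.get_preds(with_loss=True)` and then index `learn.data.valid_ds` with the sorted indices; it is exactly this last indexing step that the multi-label case seems to break.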
In my opinion, this is a serious issue. In single-labeled contexts you can take the error rate as an absolute result, so visualizing the misclassified samples is important but not vital.
Conversely, in a multi-labeled context one has to establish a threshold, so a metric like
accuracy_thresh has to be taken with a grain of salt, and visualizing misclassified examples becomes vital.
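To make the grain-of-salt point concrete, here is a numpy sketch of what accuracy_thresh computes, as I understand it: every (sample, label) cell counts equally after thresholding the sigmoid outputs. The function name `accuracy_thresh_np` is mine; this mirrors the metric's semantics rather than calling fastai.

```python
import numpy as np

def accuracy_thresh_np(logits, targets, thresh=0.5):
    """Element-wise accuracy after thresholding sigmoid outputs:
    every (sample, label) cell contributes equally to the mean."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    preds = (probs > thresh).astype(float)
    return (preds == targets).mean()

logits = np.array([[ 2.0, -2.0],
                   [ 2.0,  2.0]])
targets = np.array([[1.0, 0.0],
                    [1.0, 0.0]])
print(accuracy_thresh_np(logits, targets))  # 3 of 4 cells correct -> 0.75
```

Note that the result depends entirely on the chosen threshold: with `thresh=0.95` the same logits (sigmoid(2) ≈ 0.88) all fall below the cutoff and the score changes, which is exactly why the number alone is not trustworthy without inspecting the actual misclassifications.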
Moreover, I’d like to point out an interesting observation by Jeremy himself during lesson 3 (planet, i.e. a multi-labeled dataset), about fine-tuning:
> Now in this case, I’m fitting with my original dataset. But you could create a new data bunch with just the misclassified instances and go ahead and fit. The misclassified ones are likely to be particularly interesting.
It’s hard to find what your model has incorrectly classified when one cannot use the Interpretation API.
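Selecting those misclassified instances could be sketched like this: under a given threshold, flag every sample where at least one label is wrong, and collect the indices. The name `misclassified_idxs` is hypothetical; the resulting indices are what one would feed into a new data bunch of hard examples, along the lines of Jeremy's suggestion.

```python
import numpy as np

def misclassified_idxs(logits, targets, thresh=0.5):
    """Indices of samples where at least one label is wrong after
    thresholding; candidates for a 'hard examples' data bunch."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    preds = (probs > thresh).astype(float)
    wrong = (preds != targets).any(axis=1)
    return np.flatnonzero(wrong)

logits = np.array([[ 3.0, -3.0],
                   [-3.0,  3.0],   # both labels flipped
                   [ 3.0,  3.0]])
targets = np.array([[1.0, 0.0],
                    [1.0, 0.0],
                    [1.0, 1.0]])
print(misclassified_idxs(logits, targets))  # only sample 1 is wrong
```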
That said, I would gladly (try to) implement a version of the Interpretation API for multi-labeled data, but please give me a bit of guidance to get me started.