I’m a bit stuck, can anybody please explain how to calculate roc_auc_score for each class in a multi-label classification problem? I’m currently working with the NIH Chest X-rays dataset, but it is the same problem as the multi-label classification of satellite images from this course.
I understand that it can be calculated as roc_auc_score(y_true, y_scores), but I don’t understand how to get y_scores.
AUC is a ranking metric for binary classification, so if you have a task with multiple classes, all you do is calculate AUC for each class separately: treat the labels of class X as 1, treat the labels of all other classes as 0, take the predicted probabilities that each observation belongs to class X, and compute AUC on those.
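A minimal sketch of this one-vs-rest computation with scikit-learn, using made-up labels and scores (the arrays here are hypothetical, not from the NIH dataset). In the multi-label case, y_scores would typically be the per-class sigmoid outputs of your model:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy multi-label setup: 6 samples, 3 classes (hypothetical data).
y_true = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
])
# Predicted probabilities per class, e.g. sigmoid outputs of a model.
y_scores = np.array([
    [0.9, 0.2, 0.8],
    [0.1, 0.7, 0.3],
    [0.8, 0.6, 0.7],
    [0.3, 0.1, 0.9],
    [0.7, 0.4, 0.2],
    [0.2, 0.8, 0.6],
])

# Per-class AUC: each column is an independent binary problem,
# exactly the "class X vs. everything else" recipe described above.
per_class_auc = [roc_auc_score(y_true[:, i], y_scores[:, i])
                 for i in range(y_true.shape[1])]
print(per_class_auc)

# Macro-averaged AUC over all classes in a single call:
print(roc_auc_score(y_true, y_scores, average="macro"))
```

Note that for multi-label problems, roc_auc_score accepts the full 2-D label and score matrices directly when you want an average across classes.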
Here’s a good explanation of how it works and why it is useful: http://www.dataschool.io/roc-curves-and-auc-explained/
Thank you for your answers! Now I have a solid understanding of this topic.
Is there any way to use the current fastai implementation of the AUROC metric for multi-label (not multi-class) problems?