Roc_auc_score for multi-label classification

(Alexander) #1

Hi!

I’m a bit stuck. Can anybody please explain how I can calculate roc_auc_score for each class in a multi-label classification problem? I’m currently working with the NIH Chest X-rays dataset, but it is the same problem as the multi-label classification of satellite images from this course.
I understand that it can be calculated as roc_auc_score(y_true, y_scores), but I don’t understand how to get y_scores.
Thank you!


(sergii makarevych) #2

AUC is a ranking metric for binary classification, so if you have a task with multiple classes, all you do is calculate AUC for each class separately: treat class X as 1, treat all other classes as 0, take the predicted probabilities that the observations belong to class X, and calculate AUC on that.
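The one-vs-rest procedure above can be sketched with scikit-learn like this (the toy arrays are made up for illustration; in practice y_scores would be your model’s sigmoid outputs):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy multi-label data: 4 samples, 3 classes.
# y_true[i, k] = 1 if sample i carries label k.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])

# Predicted probabilities (e.g. per-label sigmoid outputs of the model).
y_scores = np.array([[0.9, 0.2, 0.8],
                     [0.1, 0.7, 0.3],
                     [0.8, 0.6, 0.2],
                     [0.3, 0.1, 0.9]])

# AUC for each class separately: column k becomes a binary problem.
for k in range(y_true.shape[1]):
    auc_k = roc_auc_score(y_true[:, k], y_scores[:, k])
    print(f"class {k}: AUC = {auc_k:.3f}")

# Equivalently, average=None returns one AUC per class in a single call.
per_class = roc_auc_score(y_true, y_scores, average=None)
print(per_class)
```

Passing average="macro" (the default) instead of average=None would give you the mean of the per-class AUCs in one number.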


(WG) #3

Here’s a good explanation of how it works and why it is useful: http://www.dataschool.io/roc-curves-and-auc-explained/


(Alexander) #4

Thank you for your answers! Now I have a solid understanding of this topic.


(Austin) #5

Is there any way to use the current fastai implementation of the AUROC metric for multi-label (not multi-class) problems?
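Not an authoritative answer on the fastai internals, but one workaround is to write your own metric that wraps sklearn’s roc_auc_score and averages the one-vs-rest AUC over the label columns. The function name and the skip-degenerate-columns behavior below are my own choices, not fastai API:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def multilabel_auroc(y_scores, y_true):
    """Macro-averaged one-vs-rest AUC over label columns.

    y_scores: (n_samples, n_labels) predicted probabilities
    y_true:   (n_samples, n_labels) binary targets

    Columns where only one class is present in y_true are skipped,
    since AUC is undefined for them (this can happen with rare labels
    in a small validation batch).
    """
    aucs = []
    for k in range(y_true.shape[1]):
        col = y_true[:, k]
        if col.min() == col.max():  # only one class present: skip
            continue
        aucs.append(roc_auc_score(col, y_scores[:, k]))
    return float(np.mean(aucs))

# Toy usage with made-up predictions:
y_true = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
y_scores = np.array([[0.8, 0.3], [0.2, 0.9], [0.7, 0.6], [0.1, 0.4]])
score = multilabel_auroc(y_scores, y_true)
print(score)
```

To plug something like this into a fastai training loop you would convert the prediction and target tensors to numpy arrays first; the exact wrapper depends on which fastai version you are on.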
