RocAuc score

How do I use the ROC AUC score with a vision learner? I used it as per the docs, but it gives me an error, while error_rate and accuracy work just fine. I tried the one for the pets data, RocAucBinary, and then plain RocAuc; neither worked.

After training, get the predicted probabilities and the ground truth labels for your validation set using the get_preds method:

preds, targs = learn.get_preds()

Then calculate the ROC AUC score using the roc_auc_score function from scikit-learn:

from sklearn.metrics import roc_auc_score

roc_auc = roc_auc_score(targs, preds, multi_class='ovr')
print("ROC AUC Score:", roc_auc)

In the code above, targs holds the ground truth labels for your validation set and preds contains the predicted probabilities generated by your trained model. Note that get_preds returns both, so there is no need to pull labels from dls.valid_ds.items, which holds the raw items (e.g. file paths) rather than the encoded labels.

Keep in mind that the ROC AUC score is typically used for binary classification problems. In multi-class scenarios, the multi_class parameter is set to ‘ovr’ (one-vs-rest) in the roc_auc_score function, which calculates the ROC AUC score for each class against the rest.
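For what it's worth, fastai also ships this as a metric you can attach when building the learner, so the score is reported every epoch. A rough sketch, assuming dls is your existing multi-class DataLoaders and resnet34 is just an example backbone (RocAuc wraps roc_auc_score with multi_class='ovr' by default):

from fastai.vision.all import *

# dls is assumed to be an already-built multi-class DataLoaders
learn = vision_learner(dls, resnet34, metrics=[error_rate, RocAuc()])
learn.fine_tune(1)

For a two-class setup (e.g. the cat-vs-dog pets example), RocAucBinary() is the variant intended to be passed instead of RocAuc().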

If you want to evaluate binary classification performance for a specific class, you can modify the code accordingly, as sketched below. Note, however, that the ROC AUC score may not be the most appropriate metric for multi-class image classification tasks; metrics like accuracy, top-k accuracy, or the F1 score may be more suitable in those cases.
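For example, a one-vs-rest score for a single class can be computed from the same preds and targs tensors returned above; class_idx here is a placeholder for whichever entry of dls.vocab you are interested in:

from sklearn.metrics import roc_auc_score

class_idx = 0  # placeholder: index of the class of interest in dls.vocab
binary_targets = (targs == class_idx).int().numpy()  # 1 for the chosen class, 0 for all others
class_probs = preds[:, class_idx].numpy()            # predicted probability of that class
print("ROC AUC for class", class_idx, ":", roc_auc_score(binary_targets, class_probs))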


Thanks :+1:
