ROC Curve - how to graph?

Is it possible to generate an ROC Curve graph from fastai ? Is this included in the library?

It isn’t, but you can use the functionality from sklearn to achieve this.

This has the information needed. All you have to do is output the predictions to a numpy array (along with the targets) and feed them to the sklearn functions.
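To make that concrete, here is a minimal sketch of the sklearn side for a binary classifier. The y_true and y_score arrays are synthetic stand-ins; in practice they would be the targets and predicted probabilities pulled out of your model.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Synthetic stand-ins; in practice these come from your model's
# predictions on the validation set.
y_true = np.array([0, 0, 1, 1, 0, 1])                 # ground-truth labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])  # predicted probability of class 1

# roc_curve sweeps the decision threshold and returns the curve points;
# auc integrates the curve into a single score.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)
print(round(roc_auc, 4))  # → 0.8889
```

Once fpr and tpr are in hand, plotting is just matplotlib's plt.plot(fpr, tpr).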

I have read through the link. This is how I would graph it, but I’m not sure how to get the data for it out of fastai. What would I need from the Dogs vs. Cats lesson to do this?

From the documentation:

y_score = classifier.fit(X_train, y_train).decision_function(X_test)

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

Is y_true a list then? Is this the y from Lesson 1?

My reasoning comes from comparing sklearn.metrics.roc_curve(y_true, y_score, pos_label=None, sample_weight=None, drop_intermediate=True) with what is used in the lesson, sklearn.metrics.confusion_matrix(y_true, y_pred, labels=None, sample_weight=None).

However, I’m not sure where to get y_score from.
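The key difference between the two signatures is that confusion_matrix expects hard class predictions, while roc_curve expects continuous scores (probabilities or decision-function values) so it can sweep the threshold. A small sketch with made-up arrays:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve

y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.3, 0.8, 0.6, 0.7])  # continuous scores -> roc_curve
y_pred = (y_prob >= 0.5).astype(int)     # thresholded hard labels -> confusion_matrix

print(confusion_matrix(y_true, y_pred))  # → [[1 1]
                                         #    [0 2]]
fpr, tpr, thresholds = roc_curve(y_true, y_prob)  # uses the raw scores
```

So y_score is whatever continuous output your model produces before you threshold it into class labels.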

If someone else happens to find themselves looking for the answer, this is how I got my y_score and y_test for a binary classifier:

valid_set = df.iloc[valid_idx, :]  # rows of the dataframe held out for validation
preds = learn.get_preds()          # (probabilities, targets) for the validation set
y_score = preds[0][:, 1].numpy()   # predicted probability of the positive class
y_test = valid_set.label.values    # ground-truth labels
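From there, feeding y_test and y_score into sklearn and plotting is straightforward. A sketch, using synthetic placeholder arrays in place of the values built above:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs anywhere
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Synthetic placeholders for the y_test / y_score arrays built above.
y_test = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.2, 0.7, 0.6, 0.3, 0.9, 0.65, 0.8, 0.1])

fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label="ROC curve (AUC = %.2f)" % roc_auc)
plt.plot([0, 1], [0, 1], linestyle="--")  # chance diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend(loc="lower right")
plt.savefig("roc.png")
```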

Hi @daleevans, could you please provide a minimal running example?

I’m having trouble getting valid_idx.