How to review images for bad classifications

I built a simple classification net with the fastai v1 library. Now I’d like to see which images from the validation set are not classified correctly.

Here is (part of) my code:

tfms = get_transforms(max_rotate=5)
data = ImageDataBunch.from_folder(path, bs=64, ds_tfms=tfms, size=(130, 255))
# ...
preds, y = learn.get_preds(DatasetType.Valid)  # predictions and labels for the validation set
bad_predictions = []
for i, (pred, gt_class) in enumerate(zip(preds, y)):
    pred_probability, predicted_class = torch.topk(pred, 1)  # top-1 probability and class index
    is_correct = (predicted_class == gt_class)
    if is_correct:
        continue
    bad_predictions.append((i, predicted_class, pred_probability))

ds_idx, predicted_class, probability = bad_predictions[0]
img, label = data.train_ds[ds_idx]
# UPDATE 2019-04-18: The line above contains the bug as sgugger
# pointed out. Fixed version:
# img, label = data.valid_ds[ds_idx]

All seems well; I can see an image.

When I run prediction on that image, however, the net classifies it correctly:

category, tensor, probability = learn.predict(img)  # (predicted Category, class index, probabilities)

So I assume something is wrong (prediction on the same image should be reproducible). Most likely “ds_idx” does not index what I think it does. As you can see, I used some random rotation for data augmentation. Is that reflected in data.train_ds[ds_idx]?

How can I get the images which were badly classified?
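
For reference, here is a minimal sketch of the lookup with the fix from the update applied (it assumes the same learn, data, and bad_predictions objects as above); re-running learn.predict on the misclassified validation image should then reproduce the wrong prediction:

ds_idx, predicted_class, pred_probability = bad_predictions[0]
img, label = data.valid_ds[ds_idx]              # index the validation set, not the training set
img.show(title=f'ground truth: {label}')        # display the image with its true label
category, pred_idx, probs = learn.predict(img)  # should now agree with predicted_class from get_preds
print(category, probs[pred_idx])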

You are asking for the predictions on the validation set but then taking the image from the training set; that might explain the discrepancy.

facepalm
Of course it does. Thank you.
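
For anyone who lands on this later: fastai v1 also ships a built-in helper for exactly this kind of inspection, ClassificationInterpretation. A minimal sketch (assuming the same learn and data objects, and the usual from fastai.vision import * at the top of the notebook):

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(10, 10))  # validation images with the highest loss, with predicted/actual labels
interp.plot_confusion_matrix()               # where the classes get mixed up
losses, idxs = interp.top_losses()           # losses and indices into the validation set, sorted descending
img, label = data.valid_ds[int(idxs[0])]     # the worst-classified validation image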
