Not all test examples get a prediction with model.py predict()

I have my own PyTorch model, and running the following:

preds = predict(m, md.test_dl)
len(preds), len(test_df)

returns:

(226992, 226998)

So I get 6 fewer predictions, which is exactly the remainder after dividing 226998 by the batch size (12 in this case): 226998 % 12 == 6.

What is going on and how do I rectify this?

Thanks!

Could this have any similarity to a problem I'm currently having as well?

Continuing the discussion from Kaggle Comp: Plant Seedlings Classification:

I haven’t used the fastai library myself, but the PyTorch DataLoader that fastai uses underneath has a drop_last option. If that is set to True, the final mini-batch is dropped whenever it isn’t a full batch, which seems to be exactly what is happening here. Perhaps there is an option to turn this off in fastai?
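
As a quick plain-PyTorch sketch of the effect (the dataset here is just random tensors sized to match the numbers above, not your actual data or the fastai API):

import torch
from torch.utils.data import DataLoader, TensorDataset

# 226998 dummy samples with batch size 12 -> remainder of 6
dataset = TensorDataset(torch.randn(226998, 1))
dl_drop = DataLoader(dataset, batch_size=12, drop_last=True)   # skips the final partial batch
dl_keep = DataLoader(dataset, batch_size=12, drop_last=False)  # keeps it

print(sum(x[0].size(0) for x in dl_drop))  # 226992
print(sum(x[0].size(0) for x in dl_keep))  # 226998

So rebuilding the test DataLoader with drop_last=False (or running the last partial batch through the model separately) should give you all 226998 predictions.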
