Teddy Bear Detector: How to know if an image is not a bear?

One of the issues I had when I put my model into production (a simple web app) was handling images that are very different from anything in the training/validation set.

I ran into this accidentally when I fed my Teddy Bear Detector images from the pet breed
dataset, and cats were being classified as teddy bears :grinning:.

I tried to fix this by putting a threshold on the probability (e.g. if the probability of the predicted class
was less than 85%, give up and say "I don't know").
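A minimal sketch of that thresholding idea (the 85% cutoff and class names here are just illustrative, not from any particular library):

```python
import numpy as np

def predict_with_threshold(probs, classes, threshold=0.85):
    """Return the predicted class, or abstain with "I don't know"
    when the top softmax probability falls below the threshold.
    `probs` is assumed to be a softmax output summing to 1."""
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "I don't know"
    return classes[top]

classes = ["black bear", "grizzly", "teddy bear"]
print(predict_with_threshold(np.array([0.05, 0.05, 0.90]), classes))  # confident prediction
print(predict_with_threshold(np.array([0.40, 0.35, 0.25]), classes))  # abstains
```

The catch, as I found, is that softmax can still put high probability on one class for a totally out-of-distribution image, so the threshold alone isn't a reliable fix.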

I was wondering if there are any well-known approaches to this issue: quantifying how confident the model is in its prediction and deciding when to say "I'm not sure" or "I don't know".

I would love to get some pointers to deeper studies regarding this issue.

This seems like some research related to that.


Thanks @Hadus!

I also found this article Making Your Neural Network Say “I Don’t Know” — Bayesian NNs using Pyro and PyTorch.
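A related and much simpler technique than full Bayesian NNs is Monte Carlo dropout: keep dropout active at inference time and run several stochastic forward passes, using the spread of the predictions as a rough uncertainty signal. A sketch, assuming an ordinary PyTorch classifier with dropout layers (the toy model below is just a stand-in):

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be the trained classifier.
model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(), nn.Dropout(0.5), nn.Linear(16, 3)
)

def mc_dropout_predict(model, x, n_samples=20):
    """Average softmax outputs over several stochastic forward passes
    with dropout left on. A high per-class standard deviation across
    the samples suggests the model is unsure and should abstain."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 4)
mean, std = mc_dropout_predict(model, x)
```

Note that `model.train()` also affects batch-norm layers, so a real implementation would switch only the dropout modules into training mode.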

I hope I can apply these techniques to make my models behave better in the real world.
