Let’s say I have trained an image recogniser to classify the content of an image as a dog, a cat, a rat, or an elephant. I put my model into production and, for some reason, it gets fed images of humans. The model's output suggests a 70% probability that an image of a human is an image of an elephant. I would rather the model say that the input doesn't look like any of the classes it was trained on.
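To make the issue concrete, here is a minimal sketch (PyTorch, with made-up logits) of why a plain softmax head structurally can't express "none of the above": the probabilities are forced to sum to 1 across the known classes, so the mass has to land somewhere.

```python
import torch
import torch.nn.functional as F

classes = ["dog", "cat", "rat", "elephant"]

# Pretend these are the raw logits the network produced for an image of a human.
logits = torch.tensor([0.3, -1.2, -0.8, 1.5])

probs = F.softmax(logits, dim=0)
print({c: round(p.item(), 2) for c, p in zip(classes, probs)})
# e.g. {'dog': 0.21, 'cat': 0.05, 'rat': 0.07, 'elephant': 0.68}

# The probabilities sum to 1 by construction, so "elephant" looks confident
# even though the input resembles none of the training classes.
print(probs.sum().item())  # 1.0
```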
What is the go-to way of handling this problem? I don't have the resources to train my model on every possible class that could show up, and I don't care about detecting anything other than the classes I explicitly trained on.
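The only thing I've come up with myself is a naive confidence threshold on top of the softmax, along these lines (the threshold value and helper name are made up). I suspect this is fragile, since out-of-distribution inputs can still score above any reasonable cutoff, which is why I'm asking what the established approach is:

```python
import torch
import torch.nn.functional as F

REJECT_THRESHOLD = 0.9  # hypothetical cutoff; would need tuning on a validation set

def predict_or_reject(logits: torch.Tensor, classes: list[str]) -> str:
    """Return the predicted class, or "unknown" if the top probability is too low."""
    probs = F.softmax(logits, dim=0)
    conf, idx = probs.max(dim=0)
    if conf.item() < REJECT_THRESHOLD:
        return "unknown"
    return classes[idx.item()]

classes = ["dog", "cat", "rat", "elephant"]
print(predict_or_reject(torch.tensor([0.3, -1.2, -0.8, 1.5]), classes))
# "unknown" (top probability ~0.68 is below the 0.9 cutoff)
```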
I’m pretty sure Jeremy has talked about this in one of the lectures, but I can’t seem to find it.