@mrfabulous1 @gessha have either of you tried any of the methods in Recent Advances in Open Set Recognition: A Survey or Bayesian deep learning with Fastai to solve this problem?
I have been playing with the Pets dataset using multi-category classification with a sigmoid loss function (MultiCategoryBlock with BCEWithLogitsLossFlat) instead of softmax (CategoryBlock with CrossEntropyLossFlat), as suggested by Jeremy in Lesson 9 (referred to as DOC in the paper).
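For concreteness, the setup looks roughly like this (a minimal sketch; label_func and the hyperparameters are illustrative, and cnn_learner would pick BCEWithLogitsLossFlat automatically from the MultiCategoryBlock, it is just spelled out here for contrast with the softmax setup):

```python
import re
from fastai.vision.all import *

path  = untar_data(URLs.PETS)
files = get_image_files(path/"images")

# Breed is encoded in the filename, e.g. "beagle_12.jpg" -> "beagle".
# Returning a one-element list makes each image a one-hot multi-label target.
def label_func(f): return [re.match(r'(.+)_\d+', f.name).group(1)]

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),   # sigmoid/BCE instead of softmax/CE
    get_y=label_func,
    splitter=RandomSplitter(seed=42),
    item_tfms=Resize(224))

dls = dblock.dataloaders(files, bs=64)

learn = cnn_learner(dls, resnet34,
                    loss_func=BCEWithLogitsLossFlat(),
                    metrics=accuracy_multi)
learn.fine_tune(3)
```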
I removed n classes (breeds) from the training data and then used them as the test data to see how many of them would be classified as one of the training classes. With softmax (and no threshold), every test image is necessarily assigned to one of the training classes, since the probabilities are forced to sum to 1 over the known classes. With the sigmoid loss the results are good, unless the test image (from an unseen class) shares features with one of the training classes.
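The hold-out check is roughly along these lines (continuing the sketch above; held_out and thresh are just illustrative values):

```python
# Hypothetical hold-out: two breeds removed from training and used as the test set.
held_out    = {'beagle', 'pug'}
train_files = [f for f in files if label_func(f)[0] not in held_out]
test_files  = [f for f in files if label_func(f)[0] in held_out]

# Train only on the remaining breeds.
dls   = dblock.dataloaders(train_files, bs=64)
learn = cnn_learner(dls, resnet34, loss_func=BCEWithLogitsLossFlat(),
                    metrics=accuracy_multi)
learn.fine_tune(3)

# Sigmoid probabilities on the unseen breeds; get_preds applies the loss
# function's activation, so each class score is independently in [0, 1].
test_dl  = learn.dls.test_dl(test_files)
probs, _ = learn.get_preds(dl=test_dl)

thresh = 0.5
max_probs, pred_idx = probs.max(dim=1)
rejected = max_probs < thresh        # no known class is confident -> "unknown"
print(f"{rejected.float().mean().item():.1%} of held-out images rejected as unknown")
```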
In this notebook I removed the beagle and pug classes from the training data. When I then classified them using the trained model, many of the beagle images were “wrongly” classified as basset hounds, which from a quick inspection seems understandable; I struggle to tell the difference between the two breeds myself.
The reason I am asking about the alternative methods is that, intuitively, I wouldn’t expect a CNN (or even a pet expert) to be able to distinguish an unseen class that so closely resembles a class the model was trained to detect (or the expert has observed throughout their lifetime), but I may be wrong?