NULL Class for Outliers?


I’ve got a question that borders on the philosophical and practical.

Imagine you have a CNN whose job it is to perform a simple binary classification:
A) Airplanes
B) Babies
It is well trained on a multitude of training data with many varied examples of each, and performs very well on test datasets where the picture contains an airplane or a baby.

Now, I wish to add a third class:
C) Not Airplane nor Baby
This third class should be a catchall for all inputs that are not confidently class A or B. Inference on all pictures that don’t have either airplanes or babies in them should be labeled class C. There is no training data for class C; rather, its likelihood is based solely on the unlikelihood that the input belongs to either of the other classes.

How would you make this NULL class? Is this easy to do? Is it possible?

My initial thoughts:
It feels like it has the flavor of unsupervised learning, but all of the training data is labeled…
It feels like a thing that humans are very good at: specifically, having “confidence that an object doesn’t belong to any known class”.
Is there a popular term for this type of catchall bucket, so that I can google it?

Maybe what you’re looking for is OOD, or Out-of-Distribution detection, which is about figuring out whether an input belongs to any of the classes the model was trained on at all.
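A common baseline from the OOD literature is thresholding the maximum softmax probability: if the model’s top probability is low, abstain and return the catchall class. A minimal sketch (the class names, threshold value, and `predict_with_null` helper are my own, not from any library):

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def predict_with_null(logits, threshold=0.9):
    """Max-softmax-probability baseline: if the top probability is
    below `threshold`, abstain and return the NULL class instead."""
    probs = softmax(np.asarray(logits, dtype=float))
    if probs.max() < threshold:
        return "NULL"  # class C: not confidently A or B
    return ["airplane", "baby"][int(probs.argmax())]

print(predict_with_null([4.0, -2.0]))   # confident, so "airplane"
print(predict_with_null([0.1, -0.1]))   # near-uniform, so "NULL"
```

Note that this only works well when the network actually produces low-confidence outputs on out-of-distribution inputs, which (as discussed below) is not guaranteed.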

This is covered here.

Thanks for the tip!
That does indeed seem to be a relevant search term. Other terms of interest I have come across now are training with abstention, and Mahalanobis distance.

Any idea how this is or might be implemented in fastai?
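For reference, the Mahalanobis-distance idea can be sketched in plain NumPy. This is a generic sketch, not fastai-specific: it assumes you have already extracted penultimate-layer features for the training set, fits a Gaussian per class with a shared covariance, and flags inputs far from every class centroid as OOD. All function names here are my own:

```python
import numpy as np

def fit_gaussians(features, labels):
    """Per-class means plus a shared inverse covariance, estimated
    from (penultimate-layer) features of the training data."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False)
    return means, np.linalg.inv(cov)

def mahalanobis_score(x, means, inv_cov):
    """Squared Mahalanobis distance to the closest class centroid.
    A large score suggests the input is out-of-distribution."""
    return min((x - m) @ inv_cov @ (x - m) for m in means.values())

# Toy demo with 2-D synthetic "features" for two classes.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
means, inv_cov = fit_gaussians(feats, labels)

near = mahalanobis_score(np.array([0.1, -0.2]), means, inv_cov)   # in-distribution
far = mahalanobis_score(np.array([20.0, -20.0]), means, inv_cov)  # outlier
```

You would then pick a distance threshold (e.g. from a held-out validation set) above which an input gets labeled class C.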

Thanks for the link!

It seems to me that adding a threshold to confidence metrics is useful in problems with many classes, but a little dangerous when there are few (e.g. only two).
Unless I am misunderstanding what fastai is doing in your linked example, it’s saying something like this:
conf_threshold = 0.4
class_probabilities = [0.1, 0.7, 0.04, 0.16]
return [p > conf_threshold for p in class_probabilities]
>>> [False, True, False, False]

This means that an example that doesn’t strongly activate any of the classes will likely end up with [0.25, 0.25, 0.25, 0.25] >>> [False, False, False, False]

This approach doesn’t work as well if you only have two classes: because the probabilities must always sum to 1, a maximally uncertain example would produce [0.5, 0.5] >>> [True, True]

Even if you increased the confidence threshold to, say, 0.6, you are still likely in the “splash zone” for noise: the class_probabilities could easily come out as [0.65, 0.35] even though, given the chance, the model might actually have rated it prob_class_A = 0.065, prob_class_B = 0.035, prob_class_NULL = 0.9
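To illustrate this two-class failure mode numerically: with a softmax over only two logits, the probabilities are forced to sum to 1, so even a genuinely ambiguous input produces a confident-looking winner (the logit values below are made up for illustration):

```python
import numpy as np

def softmax(logits):
    """Probabilities over the two classes, summing to 1."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# A genuinely ambiguous input: only a tiny logit gap between classes.
probs = softmax(np.array([0.62, 0.0]))  # roughly [0.65, 0.35]

# A 0.6 threshold still "confidently" accepts class A...
passes = bool(probs[0] > 0.6)

# ...even though with an explicit third class, the same evidence
# might have been spread as something like [0.065, 0.035, 0.9].
print(probs.round(2), passes)
```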

Am I misunderstanding how fastai uses that threshold value?

Hope this thread helps.