Image classifier with negative class

Hello,

I was wondering how it would be possible to train a negative class in an image classifier. By negative class (I'm not sure of the official name for this), I mean a class that denies the presence of the other class(es).

For instance, this scenario was showcased in the TV series Silicon Valley with the “Not a Hotdog” app, which indicated whether the provided image contained a hotdog or not.

So I wonder, how could the “not hotdog” class be trained? I’ve thought that maybe it could be trained with random images of anything, but I’m not sure that makes sense.

Also, how could this be trained in a multiclass classifier? To provide some context, we could think of the teddy/grizzly/black bear problem presented in lesson 2. Should we add a “no bear” class to the classifier? Or would it be better to pipeline two models (i.e., 1: bear vs. no bear; 2: teddy/grizzly/black)?

Thanks,
Dhanesh

Hi!
The terms I usually hear for this are showing negative samples or doing negative mining.
If you look into part 2 of the course, you will find Jeremy talking about this in the context of object detection, where this case comes up a lot.

Creating a “no object” class is an obvious way to represent what you want here, but it is probably not the smartest. Such a class would be very difficult to learn, because almost any arbitrary-looking image is “not a bear”, so the class has no coherent visual features of its own. What you can do instead is use binary cross entropy for each of your classes, basically having the network output an independent probability for each question:

  • is it a teddy?
  • is it a grizzly?
  • is it a black bear?

Now if each of those output probabilities is close to 0 (or below a certain threshold), you can decide that the image is probably none of them. This way you don’t have to encode everything that is not a bear into a class. The labels for those no-object images would then simply be (0, 0, 0) in a one-hot-encoded sense.
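Here is a minimal sketch of that idea in PyTorch. The tiny linear head, image size, and 0.5 threshold are just placeholders I made up for illustration (a real model would be a CNN, e.g. a fastai `cnn_learner`):

```python
import torch
import torch.nn as nn

classes = ["teddy", "grizzly", "black"]

# Toy head: one independent (sigmoid) output per class, no softmax.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, len(classes)),
)
loss_fn = nn.BCEWithLogitsLoss()  # sigmoid + binary cross entropy per class

x = torch.randn(8, 3, 224, 224)   # batch of images
y = torch.zeros(8, len(classes))  # multi-hot labels; all zeros = "no bear"
y[0, 0] = 1.0                     # e.g. the first image is a teddy

logits = model(x)
loss = loss_fn(logits, y)

# At inference: threshold each probability; if none clears it, predict "no bear".
probs = torch.sigmoid(logits)
preds = probs > 0.5
no_bear = ~preds.any(dim=1)       # True where no class passed the threshold
```

The key design choice is `BCEWithLogitsLoss` instead of softmax cross entropy: the classes are no longer forced to compete for probability mass, so “none of the above” falls out naturally.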

How do you determine how many non-bear images you need? Experiment. When you incorporate a lot of those negative samples, you might want to look into focal loss, which is a nice way to deal with extreme class imbalance (very different numbers of samples between classes).
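If you want to see what focal loss actually does, here is a rough sketch of the binary variant from the RetinaNet paper (Lin et al., 2017); the `gamma` and `alpha` values are the paper’s defaults, not anything specific to this bear example:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy, confidently-classified examples
    so the abundant negative samples don't dominate the gradient."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob of the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class weighting
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

It is a drop-in replacement for the BCE loss above: easy negatives get `(1 - p_t) ** gamma` close to 0 and contribute almost nothing, while hard examples keep nearly their full loss.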
