How to train a network to recognize 'is' or 'is not'?

I don’t know how to word my question concisely enough to google it, so I’ll just explain it.

Let’s say I want to train a network to recognize a specific person, Robin Williams for example. How do I train the network to distinguish him from other people, assuming the input is an image file with potentially several people in it?

The problem I’m having is that if I train it on just pictures of Robin Williams, I think the network would learn the general features of a person (nose, eyes, mouth, etc.) and say that any person is Robin Williams. If I use two labels, one is_robin and another not_robin, what would my not_robin training set consist of? I don’t think the not_robin training set can just be random people, because then there wouldn’t be a pattern to learn.

Maybe a neural network isn’t the correct structure to use for this kind of problem?


I would select my not_robin images to be as similar as possible to what I’d expect the model to see when it’s deployed, whatever purpose that turns out to be. For instance, if you expect your model to be fed not_robin images that look like yearbook photos of people who are not Robin Williams, then you should try to source yearbook photos of people who are not Robin Williams.
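As a minimal sketch (assuming fastai v2’s API and a made-up folder path), the idea is just that the not_robin/ folder is filled with the kind of photos the model will actually see once deployed:

```python
from fastai.vision.all import *

# Hypothetical layout (paths are made up for illustration):
#   data/robin/robin/      -> photos of Robin Williams
#   data/robin/not_robin/  -> e.g. yearbook-style photos of other people,
#                             i.e. whatever you expect at deployment time
path = Path('data/robin')

dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,   # hold out 20% of images for validation
    item_tfms=Resize(224))          # resize so images can be batched
```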

I’m not sure why you came to the conclusion that deep learning isn’t an appropriate solution to this type of problem. It seems like the best solution to me from afar.

For instance, if you expect your model to be fed not_robin images that look like yearbook photos of people who are not Robin Williams, then you should try to source yearbook photos of people who are not Robin Williams.

Isn’t there the potential of the network learning to recognize the structure of yearbook photos instead of just people who are not Robin Williams?

The trouble I’m having is that Robin Williams belongs to the set of people, so he’s a subset of “person”. But if I train the model with images of other people, and he belongs to that same set, how is the model supposed to say “yes, this is that particular person” or “no, it’s not him” when all people share the same basic features, because they’re all in the same people set?

A similar example: training a network to recognize a Toyota Camry in an image set of other cars.

Would it be appropriate to train a model on just images of Robin Williams, and then, below an arbitrary confidence level (0.8 for example), just say it’s not him?

Still trying to understand the problem… what is your goal for the Robin Williams classifier? To be able to classify an image of anything in the whole world as either being Robin Williams or not? To be able to classify pictures of people as Robin Williams or not? Something else? I think your goal should motivate your choices of not_robin photos.

Yes, to be able to recognize Robin Williams in an image set of anything. The set could contain images of the sky, cars, people, and the model would say whether he’s in the image or not. Obviously I can’t train the network to recognize every conceivable object and situation, so how would it be done?

No, a neural network is a perfectly fine structure for this kind of problem, and yes, not_robin can easily be just pictures of random people; lots of people have done exactly that and gotten good enough results.

Ideally you want the not_robin examples to be close to the robin examples. For example, if Robin is a white man, make sure you have enough white male not_robins too, so that the neural network does not learn to recognize white men in general. This is related to a problem faced by black women, where face classifiers can be 30% less accurate for them compared to white male faces, as @jeremy mentions during the lecture itself. This is actually part of the reason why @rachel encourages greater diversity in AI.
I recently saw an example of that right here on the forum, where a guy was trying to build a Trump vs. not-Trump classifier, and since most of the Trump images showed him wearing a suit, the classifier wrongly labeled many non-Trump people wearing suits as Trump too.
But keep in mind that even with this mistake his classifier was running at above 90% accuracy, so it is still a perfectly good classifier. (Will provide a link if I find it.)

In short, this is a perfectly good way to build a classifier. Just try to have the not_robin examples as close to the robin examples as you can, though it’s OK even if you don’t.
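For what it’s worth, the whole two-class setup is small. A rough sketch, assuming fastai v2’s API, a pretrained resnet34, and a hypothetical folder layout with one robin/ and one not_robin/ folder (the not_robin one mixing close look-alikes with broader negatives):

```python
from fastai.vision.all import *

path = Path('data/robin')   # hypothetical: robin/ and not_robin/ subfolders
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42, item_tfms=Resize(224))

# Transfer-learn a pretrained CNN on the two classes
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(3)      # a few epochs is usually plenty for a binary classifier

learn.show_results()    # eyeball whether the close not_robins are handled correctly
```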


I hadn’t thought about that. It makes sense to use a not_robin training set that contains examples fairly close to the robin training set; I guess that way the subtle differences between the classes are better learned. Thank you for the reply.


Good considerations, Jayam. However, my intuition tells me that while it’s good to have a bunch of white baby-boomer males in the not_robin set for the reasons you mentioned, if Joe’s model is going to be asked to classify random things in the world, like a cantaloupe, as robin or not_robin, then the not_robin set should also include a healthy dose of examples that aren’t white baby-boomer males. Otherwise, when the classifier does run up against a cantaloupe or a thermostat or a dump truck at inference time, it will not know how to behave.


Otherwise, when the classifier does run up against a cantaloupe or a thermostat or a dump truck at inference time, it will not know how to behave.

This is true. I took Jayam’s suggestion as a sort of addition to the “regular” examples in the not_robin training set.

To deal with random images such as a cantaloupe or a thermostat, I think I have to consider the network’s confidence instead of just spitting out the highest-confidence class. For example, if I give the network an image of a tank, the network would not be as confident in its decision. So if it’s 0.62 for robin and 0.38 for not_robin, I could just say the confidence is not high enough and therefore classify it as “not_robin”.

Simply put: if the confidence of the robin class is below a certain threshold (0.8 for example), classify as not_robin; otherwise go with the highest-confidence class.
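A minimal sketch of that rule, assuming a trained fastai learner named learn whose vocab contains 'robin' and 'not_robin', and a threshold you would really want to tune on a validation set rather than pick arbitrarily:

```python
from fastai.vision.all import PILImage

THRESHOLD = 0.8  # arbitrary cut-off, as in the example above

def classify(learn, img_path, threshold=THRESHOLD):
    # learn.predict returns (decoded_label, class_index, probabilities)
    _, _, probs = learn.predict(PILImage.create(img_path))
    p_robin = probs[learn.dls.vocab.o2i['robin']].item()
    # Only accept "robin" when the model is confident enough; anything
    # uncertain (a tank, a cantaloupe, ...) falls back to not_robin.
    return 'robin' if p_robin >= threshold else 'not_robin'
```

With that in place, the 0.62-for-robin tank from the example above would come back as not_robin.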

Yes, definitely.
What I was trying to say is that, along with the other images, you should also have not_robins that are close to the robins.