Handle data that belongs to classes not seen in training or testing

You’d want to use the Lesson 3 Planets code with a sigmoid output, since for Planets we handle the case where no labels are present (blank). The only difference is that here each image is single-labeled instead of multi-labeled.

2 Likes

I would be careful with the approach of trying to learn a label that represents “this image has none of the other classes”, as that is probably a very hard thing to learn, especially with a large number of classes.

I read through the suggestions in a few threads, and agree that multiclassification is the best solution to this problem. Here is how you do it, in code:

Add a label_delim parameter, even if there is only one label:

data = (ImageList.from_csv(path, "cleaned.csv")
        .split_by_rand_pct(valid_pct=0.2, seed=77)
        .label_from_df(cols=1, label_delim=' ')
        .transform(get_transforms(), size=224)
        .databunch(bs=64)
        .normalize(imagenet_stats)
); data

Change accuracy metrics to include a threshold:

learn = cnn_learner(data, models.resnet34, metrics=[partial(accuracy_thresh, thresh=0.95)])

For inference, supply a threshold parameter. If the probability of every class is below the threshold, the prediction will be empty:

prediction, _, probabilities = learn.predict(test_img, thresh=0.95)
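Conceptually, the thresholded prediction behaves like the following framework-agnostic sketch (the function name and class list here are made up for illustration): each class probability is compared to the threshold independently, and if none clears it, the prediction comes back empty.

```python
def predict_with_threshold(probs, classes, thresh=0.95):
    """Keep every class whose (sigmoid) probability clears the threshold.

    An empty result is the "none of the known classes" case.
    """
    return [c for c, p in zip(classes, probs) if p >= thresh]

classes = ["beagle", "basset_hound", "pug"]
print(predict_with_threshold([0.10, 0.98, 0.20], classes))  # ['basset_hound']
print(predict_with_threshold([0.40, 0.30, 0.20], classes))  # []
```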

HTH

1 Like

mrfabulous1 :grinning::grinning:

There’s been some theoretical work in this area under the name of open set recognition and open world recognition.

Open-set recognition (introduced formally here) is a topic in computer vision whose goal is to train a model that recognizes a set of known classes with high accuracy and can also recognize when an unknown class is shown to it.

Open-world recognition (introduced formally here) extends open-set recognition: it aims to recognize unknown classes, help label them, and then retrain the original model to recognize the new classes.

A recent survey on open-set recognition lists common approaches and discusses future directions for the topic.

One basic approach is to train a one-class SVM that recognizes each individual class; the ensemble of SVMs then predicts whether an input belongs to a known class or is unknown.
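As a hedged sketch of that idea, here is a toy version using scikit-learn's `OneClassSVM` on made-up 2-D feature blobs standing in for real image features (the class names and data are invented for illustration):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Two known classes as tight 2-D Gaussian blobs (toy stand-ins for features).
known = {
    "cat": rng.normal(loc=(0.0, 0.0), scale=0.3, size=(200, 2)),
    "dog": rng.normal(loc=(5.0, 5.0), scale=0.3, size=(200, 2)),
}
# One one-class SVM per known class.
models = {name: OneClassSVM(nu=0.05, gamma="scale").fit(x) for name, x in known.items()}

def open_set_predict(x):
    """Return the known class that accepts x, or 'unknown' if none does."""
    hits = [name for name, m in models.items() if m.predict(x.reshape(1, -1))[0] == 1]
    return hits[0] if hits else "unknown"

print(open_set_predict(np.array([0.1, -0.1])))    # inside the "cat" blob
print(open_set_predict(np.array([50.0, -50.0])))  # far from everything: "unknown"
```

The key property is that each SVM models only its own class, so a sample rejected by every model is flagged as unknown rather than forced into the nearest known class.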

3 Likes

Hi gessha hope you had a wonderful day!

Thank you for a wonderfully informative post, the first link is a wonderfully clear explanation and the other papers really clarify the theme of this thread.

Many thanks mrfabulous1 :smiley::smiley:

1 Like

@mrfabulous1 @gessha have either of you tried any of the methods in Recent Advances in Open Set Recognition: A Survey or Bayesian deep learning with fastai to solve this problem?

I have been playing with the pets data set using multi-category classification with a sigmoid loss function (MultiCategoryBlock with BCEWithLogitsLossFlat) instead of softmax (CategoryBlock with CrossEntropyLossFlat), as suggested by Jeremy in Lesson 9 (referred to as DOC in the paper).

I removed n classes (breeds) from the training data and then used them as the test data to see how many of them would be classified as one of the training classes. If softmax is used (without a threshold), all the test images will be incorrectly classified as one of the training classes. When a sigmoid loss function is used, the results are good unless the test image (unseen class) looks similar (has similar features) to one of the training classes.
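The softmax vs. sigmoid difference can be seen with plain arithmetic. Softmax is shift-invariant: it only looks at relative logits, so a network that is lukewarm about everything produces the same distribution as one that is strongly activated, while a sigmoid preserves absolute confidence. A minimal sketch (the logit values are made up):

```python
import math

def softmax(logits):
    exps = [math.exp(z) for z in logits]
    return [e / sum(exps) for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

confident = [5.0, 4.8, 4.9]    # a known class strongly activates the network
lukewarm = [-3.0, -3.2, -3.1]  # unseen class: nothing really fires

print(softmax(confident))           # identical to the line below:
print(softmax(lukewarm))            # softmax hides the drop in absolute confidence
print(sigmoid(5.0), sigmoid(-3.0))  # sigmoid exposes it
```

This is why a thresholded sigmoid can emit "no known class" while an unthresholded softmax always picks one.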

In this notebook I removed the beagle and pug classes from the training data. Then, when I classified them using the trained model, many of the beagle images were “wrongly” classified as basset hounds, which from a quick inspection seems understandable: I struggle to tell the difference between the two classes.

The reason I am asking about the alternative methods is that intuitively I wouldn’t expect a CNN (or even a pet expert) to be able to distinguish an unseen class that so closely resembles a class the model was trained to detect (or the expert has observed over a lifetime), but I may be wrong?

2 Likes

Hi cudawarped hope all is well!
Nice work :smiley:

I haven’t tried the methods you described yet as I am currently working on a big AI project. However, your results look good.

> In this notebook I removed the beagle and pug classes from the training data. Then, when I classified them using the trained model, many of the beagle images were “wrongly” classified as basset hounds, which from a quick inspection seems understandable: I struggle to tell the difference between the two classes.

> The reason I am asking about the alternative methods is that intuitively I wouldn’t expect a CNN (or even a pet expert) to be able to distinguish an unseen class that so closely resembles a class the model was trained to detect (or the expert has observed over a lifetime), but I may be wrong?

I agree with both your points above. In Lesson 1, Jeremy talks about the difficulty of telling two types of cat apart. I don’t think it’s currently possible to tell two breeds or certain items apart if human experts can’t; there must be some discernible differences for classification to be achievable. My own experience is the same as yours. This post I made yesterday, Lesson 1 Assignment - confused about results, describes the difficulties when classes with similar images are present.

I will try the techniques you mention when I next build a classifier.

Cheers mrfabulous1 :smiley::smiley:

2 Likes

Hi mrfabulous1,

The problem that I see, at least with dog breeds, is that there is enough variation in the look of dogs of the same breed that some beagles look more like the “mean” basset hound than a beagle, and conversely some basset hounds look more like the “mean” beagle than a basset hound, though I may be incorrect in this observation. In that case I can’t see that anything can be done.

In the general case, if the above isn’t true (i.e. beagles don’t look more like the “mean” basset hound, but are merely closer to that “mean” than some basset hounds), the probability from the Bayesian approach would tell me how far I am away from this “mean” image. However, I face exactly the same problem as with the sigmoid output: given a basset hound probability of 0.4, I still have to decide how to classify the image (unknown or basset hound). If I classify it as unknown (threshold 0.4), this will most likely reduce the accuracy of my classifier on known classes. This leads to the obvious question of which is “better”: the output from the multi-label sigmoid or the probability from the Bayesian approach. I guess I will have to run some tests.
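That decision can be framed as an explicit trade-off on a validation set: for each candidate threshold, measure how many known-class examples you keep versus how many unknown examples you correctly reject. A small sketch with toy probabilities (the numbers are invented for illustration):

```python
def threshold_tradeoff(known_probs, unknown_probs, thresh):
    """For a candidate threshold, report:
    - the fraction of known-class examples still accepted, and
    - the fraction of unknown examples correctly rejected.
    Each probability is the sigmoid output for the top predicted class.
    """
    known_kept = sum(p >= thresh for p in known_probs) / len(known_probs)
    unknown_rejected = sum(p < thresh for p in unknown_probs) / len(unknown_probs)
    return known_kept, unknown_rejected

# Toy validation probabilities (assumed, for illustration only).
known = [0.95, 0.9, 0.8, 0.6, 0.99]
unknown = [0.5, 0.3, 0.45, 0.7]

print(threshold_tradeoff(known, unknown, 0.4))   # (1.0, 0.25)
print(threshold_tradeoff(known, unknown, 0.75))  # (0.8, 1.0)
```

Sweeping the threshold over a grid and plotting the two numbers gives a curve you can pick an operating point from, which is one way to compare the sigmoid and Bayesian outputs empirically.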

More generally I would like to find out which is the best solution to open-set recognition problem when applied to images.

1 Like

> I don’t think it’s currently possible to tell two breeds or certain items apart if human experts can’t.

At the beginning of Lesson 2, Jeremy shows an example someone did of classifying ~110 cities or countries (can’t remember which) from satellite imagery. This is a situation where I can’t imagine a human doing as well as the classifier did (I think it got something like ~85% accuracy). So I would suggest it is possible, even if unlikely.

Hi knesgood hope you’re having a fabulous day!

You’re right about that example; however, even though those maps look similar to us, there are thousands if not millions of discernible differences between the images, so this is trivial for an AI but difficult for a human.
If you take a real Rolex watch and a fake one that fools the Rolex watchmakers themselves and the experts alike, then it is unlikely that an image classifier can tell the difference either.
As in the previous posts, basset hounds and beagles can look so similar that experts can’t tell them apart, because there aren’t enough discernible differences in their features.
This shows up clearly when you look at confusion matrices.
Also, in one of Jeremy’s videos he talks about the difficulty of classifying a breed of cat that experts debate about on the internet because the breeds are so similar: one side says it’s this breed, the other says it’s the other, and the classifier he built was confusing the two breeds as well.

Cheers mrfabulous1 :smiley::smiley:

You, sir, make very valid points. You’ve also given me my homework for the weekend - Folex detector :slight_smile:

I agree with both of you, but there is a difference between a classifier which sees all the classes in training and the open-set recognition problem.

From my understanding, there will, or at least should, be “unseen” features which a classifier picks up on that a human may not, and vice versa (my interpretation is that this is desirable, and even the point of ML).

That said, this may not always work: for example, a resnet34 multi-class classifier trained on the pets data set confuses the unseen class of tench with Abyssinian cat. Looking at the two images below, it is not entirely obvious why; however, it becomes clearer from the images further down that the classifier is using texture and “sees” the tench’s scales as the pattern in the Abyssinian’s fur.

On the other hand, when this does work, the “unseen” features should allow a classifier to distinguish between a beagle and a basset hound if both classes appear in the training data.

In the case of the beagle/basset hound conundrum, where the classifier doesn’t see any beagles in training, I assume classification accuracy can be helped if there is an “unseen” feature present only in basset hounds. But I guess this will generally depend on the feature and the weight the classifier places on it, not on the function used in the last layer (softmax/sigmoid/OpenMax etc.) or on a probabilistic (Bayesian) approach.

1 Like

Hi @cudawarped, I haven’t tried the methods you mentioned as I’m still getting into the open-set topics.

It makes sense that visually similar classes tend to get confused. The reason is that general image classification algorithms are trained to recognize classes that are visually different from each other (coarse-grained classification) rather than visually similar to each other (fine-grained classification). There are model modifications we can make to tackle each type of problem, but I’m not sure there’s a general approach that handles both.

In the cats and fish example, we see confusion between the two classes due to another characteristic of current methods: the networks we train sometimes learn simple local features that don’t generalize well. That is because we accept any supervision signal that improves absolute model accuracy instead of trying to find features that are more global. There is an interesting article on this topic.

If you want to improve the accuracy, I’d suggest trying self-supervised pre-training or self-supervision as an additional network task because it’s found to improve generalization.
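One common self-supervised pretext task is rotation prediction: rotate each image by a multiple of 90° and train the network to predict the rotation, which pushes it toward global object structure rather than local texture. A minimal data-preparation sketch with NumPy (the helper name is made up; a real pipeline would feed these pairs into a training loop):

```python
import numpy as np

def make_rotation_task(images):
    """Build a rotation-prediction pretext dataset: each image is rotated by
    0/90/180/270 degrees and the number of quarter-turns is the free label."""
    xs, ys = [], []
    for img in images:
        for k in range(4):              # k quarter-turns
            xs.append(np.rot90(img, k))
            ys.append(k)
    return xs, ys

imgs = [np.arange(9).reshape(3, 3)]     # one toy 3x3 "image"
xs, ys = make_rotation_task(imgs)
print(len(xs), ys)  # 4 [0, 1, 2, 3]
```

The labels cost nothing, so this pretraining can use unlabeled data before fine-tuning on the supervised task.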

1 Like

Hello everyone, I hope you’re all having more fun than you can handle today!

I have been following the A walk with fastai2 - Study Group and Online Lectures Megathread being run by muellerzr, and we covered a little about dealing with classes that your classifier hasn’t been trained on.
https://colab.research.google.com/github/muellerzr/Practical-Deep-Learning-for-Coders-2.0/blob/master/Computer%20Vision/03_Unknown_Labels.ipynb

If you look at this notebook you will see a way of using multi-label classification to recognize images that your classifier has not been trained on.

It looked good to me.

Cheers mrfabulous1 :smiley: :smiley:

2 Likes

Shameless self-plug here:

I wrote an article on the topic of Open Set Recognition which is relevant to this post. It’s a general introduction to the topic with examples of what it means for something to be unknown.

I am planning to post a follow-up with a practical intro to the field, including code examples.

If anyone is interested you can read the article here and if you have any feedback on the writing you can DM me here or on Twitter.

Thank you for your time!

1 Like

Hi gessha hope you’re well!!
Nothing wrong with a little self-plugging; I heard a quote once that went “Early to bed, early to rise, it’s not much use if you don’t advertise” :smiley:

I enjoyed reading your post and was intrigued that the subject uses the same principle as https://en.wikipedia.org/wiki/Johari_window to help classify the types of unknown classes. Looking forward to your future posts, which may contain a few easy :smiley: code examples.

Cheers mrfabulous1 :smiley: :smiley:

1 Like

Nice demonstration in Colab, but as far as I can tell the approach is the same as that described by Jeremy in Lesson 9.

Does muellerzr go into any alternative methods for solving this type of problem in his study group?

Hi cudawarped hope you are having a jolly day!

I haven’t seen any other techniques to do this, however the course is not finished yet.

The course is largely based on Jeremy’s work, so this is expected.

There may be many better ways, but I think this one is easy to implement in terms of effort and expense, and it definitely beats doing nothing at all. Many people I show models to are not impressed when they enter a cat into a car classifier and it says it’s a Porsche :smiley:.

Do you know of any simpler methods?

Cheers mrfabulous1 :smiley: :smiley:

My mother agrees with you: it’s not an intelligent system if we have to jump through hoops to get it working.

1 Like