When I tested the model, I also tried a picture of myself and a 100% black image.
Myself: Prediction: white_headed_capuchin; Probability: 0.9944
100% black image: Prediction: bald_uakari; Probability: 0.4602
So it's good I did not end up as a bald uakari. But I'm not amused.
I'm sure that I don't look like a monkey, so I expected a very low probability.
Is this to be expected? Does this type of model predict that everything it has not seen before must be a monkey? What can I do to get a better result (a low predicted probability of being a monkey)?
Thanks for the super fast reply. I see it's a known problem with many solutions. Thanks, I have some reading to do, and I'm sure it will show me where to go. I sort of expected the fast.ai library to help me out and make the probability 0.0 for non-trained objects, but now I see that is not the case.
It might be good to put this in the course, or perhaps I read over it.
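One reason a classifier can never output 0.0 for every class is the softmax output layer: it normalizes the class scores so they always sum to 1, so even an image of none of the classes gets a confident "winner". A minimal sketch (plain NumPy, with hypothetical logits standing in for a real model's outputs) contrasting this with independent sigmoid outputs, as used in multi-label training, where all classes can legitimately get low probabilities:

```python
import numpy as np

def softmax(logits):
    # Exponentiate and normalise: outputs always sum to 1,
    # so one class must "win" even for out-of-domain inputs.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sigmoid(logits):
    # Independent per-class probabilities; they need not sum to 1,
    # so every class can be low at the same time.
    return 1.0 / (1.0 + np.exp(-logits))

# Hypothetical logits for an out-of-domain image: all classes score low.
logits = np.array([-2.0, -1.5, -1.8])

print(softmax(logits))  # one class still gets a sizeable probability
print(sigmoid(logits))  # every class stays below 0.2
```

This is why one of the commonly suggested fixes is to train as a multi-label problem (sigmoid + binary cross-entropy instead of softmax + cross-entropy): an input that matches no class can then produce uniformly low probabilities.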
Jeremy gives an example of a skin classifier with unexpected consequences.
Also, in the above notebook, Jeremy talks about avoiding disaster by having a deployment process that helps avoid deploying a model such as your bear classifier, which, if deployed in an app, might possibly upset a few people.
27. What are the three steps in the deployment process?
Out-of-domain data and domain shift are examples of a larger problem: that you can never fully understand the entire behaviour of your neural network. They have far too many parameters to be able to analytically understand all of their possible behaviors. This is the natural downside of their best feature—their flexibility, which enables them to solve complex problems where we may not even be able to fully specify our preferred solution approaches. The good news, however, is that there are ways to mitigate these risks using a carefully thought-out process. The details of this will vary depending on the details of the problem you are solving, but we will attempt to lay out here a high-level approach, summarized in