Do I really look like a monkey? Unexpected image classification results

Hi TauvicR, hope all is well!

The fastai book covers this subject in some detail.

In addition to the posts above, I would look at the ethics notebook (fastbook/03_ethics.ipynb at master · fastai/fastbook · GitHub), where you can see plenty of examples of precisely your experience from major companies, and the production notebook (fastbook/02_production.ipynb at master · fastai/fastbook · GitHub).

Jeremy gives an example of a skin classifier with unexpected consequences.

Also, in the production notebook Jeremy talks about "Avoiding Disaster": having a deployment process that helps you avoid releasing a model like your bear classifier, which, if deployed in an app, might possibly upset a few people :cry: :cry:.

Question 27 of the chapter questionnaire even asks: "What are the three steps in the deployment process?"

> Out-of-domain data and domain shift are examples of a larger problem: that you can never fully understand the entire behavior of your neural network. They have far too many parameters to be able to analytically understand all of their possible behaviors. This is the natural downside of their best feature—their flexibility, which enables them to solve complex problems where we may not even be able to fully specify our preferred solution approaches. The good news, however, is that there are ways to mitigate these risks using a carefully thought-out process. The details of this will vary depending on the details of the problem you are solving, but we will attempt to lay out here a high-level approach, summarized in [a figure in the chapter].
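One concrete mitigation along these lines is human-in-the-loop triage: instead of auto-accepting every prediction, route low-confidence ones to a person for review. Here is a minimal sketch in plain Python (no fastai); the class names and the `REVIEW_THRESHOLD` cutoff are made-up illustrations, not anything from the book:

```python
import math

CLASSES = ["grizzly", "black", "teddy"]  # hypothetical bear classifier labels
REVIEW_THRESHOLD = 0.9  # assumed cutoff; you would tune this on validation data

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def triage(logits):
    """Return (label, probability, needs_review) for one prediction."""
    probs = softmax(logits)
    prob = max(probs)
    label = CLASSES[probs.index(prob)]
    # Flat, low-confidence distributions often signal unusual inputs,
    # so anything under the threshold goes to a human reviewer.
    return label, prob, prob < REVIEW_THRESHOLD

print(triage([4.0, 0.5, 0.2]))   # confident prediction -> auto-accept
print(triage([1.1, 1.0, 0.9]))   # flat distribution -> flag for human review
```

Note the caveat: softmax confidence is not a complete defence against out-of-domain data, since networks can be confidently wrong, which is exactly why the book pairs this kind of check with a staged rollout and ongoing monitoring.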

As part of my pipeline I go through the concepts laid out in these two chapters for every model I create.

Because if we, the engineers, developers, and scientists, get it wrong, what chance does the general public have?

Cheers mrfabulous1 :smiley: :smiley: