Lesson 1 official topic

The short answer is no. When you train a model, it will find the easiest path to answering the question correctly, and if the answer is always True it will learn that very quickly. You can run into the same type of problem if you have very imbalanced classes. For example, if you have 10 images of Cats and 500 of Dogs, the model will quickly learn that the answer is usually Dog and may not generalize well.

There are some techniques you can use to counteract this, for example applying a much bigger 'weight' to the Cat class when the loss is computed. This increases the 'penalty' (a larger loss) when the model predicts Dog but the real answer is Cat. An analogy: the Dog questions are worth 1 point on a test and the Cat questions are worth 100 points.

You can always add anything you like as the 'negative' class when training, but if it is not representative of the 'negative' examples the model would encounter 'in real life', then your model will likely perform poorly when it faces negative classes in real life.
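To make the test-score analogy concrete, here's a minimal sketch in plain Python of a class-weighted loss. The weights (1 for Dog, 100 for Cat) and the probabilities are made-up illustration values, not anything from a real training run; in practice you'd pass weights like these to your framework's loss function (e.g. the `weight` argument of PyTorch's `nn.CrossEntropyLoss`).

```python
import math

# Hypothetical class weights mirroring the "1-point vs 100-point" analogy:
# mistakes on the minority class (Cat) cost 100x more than on Dog.
WEIGHTS = {"dog": 1.0, "cat": 100.0}

def weighted_nll(prob_of_true_class, true_label):
    """Negative log-likelihood of the true class, scaled by its class weight."""
    return WEIGHTS[true_label] * -math.log(prob_of_true_class)

# Suppose the model gives only 0.1 probability to the true class.
# The same error is penalized 100x more when the true class is Cat:
penalty_cat = weighted_nll(0.1, "cat")  # large loss: minority-class mistake
penalty_dog = weighted_nll(0.1, "dog")  # same error, 100x smaller loss
```

The optimizer minimizes the total loss, so with weights like these it can no longer 'win' by mostly predicting Dog; getting Cats wrong is simply too expensive.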