Validation Loss vs Error Rate

I recently submitted my solution for a Kaggle competition, and the notebook is here:
https://www.kaggle.com/karanchhabra99/humpback-whale-identification

While using fit_one_cycle, I noticed towards the end that the validation loss was decreasing while the error rate was increasing. Why is that? And what does loss actually mean?

Can someone also suggest ways to improve the code? My rank on Kaggle is currently around 1400.

@karanchhabra99
Loss is simply a measure of how far the model's prediction is from the ground-truth label of a data point: the closer the prediction, the lower the loss.
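For concreteness, here is a minimal sketch (plain PyTorch, with made-up numbers) of how cross-entropy, the usual classification loss, scores a single prediction against its label:

```python
import torch
import torch.nn.functional as F

# Raw model outputs (logits) for one image, over 3 classes.
logits = torch.tensor([[2.0, 0.5, -1.0]])
label = torch.tensor([0])  # ground-truth class index

# Cross-entropy is low when the probability assigned to the
# true class is high, and grows as the prediction drifts away.
loss = F.cross_entropy(logits, label)
print(loss.item())  # small here, since class 0 has the largest logit
```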
A low loss doesn’t necessarily mean that the model knows how to classify well. Imagine this…
Suppose I want to tell apples from oranges, and I have two images of apples, one with a blue background and one with a green background, plus an image of an orange with a blue background. If I compute a "loss" directly over the raw pixels, the distance between the apple and the orange with blue backgrounds will be lower than the distance between the two apple images, because the background dominates the pixel values.
This is a toy example. In a convolutional neural network, the loss isn't computed over the raw images but over "features" extracted from them, yet the same idea applies. That's why you should rely on your metrics more than on your loss (at least in cases like yours). A numeric illustration of how the two can diverge is below.
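Here is a hypothetical sketch of exactly the pattern you saw: between two "epochs", the model becomes much more confident on the examples it already classifies correctly (which drives the average cross-entropy loss down), while one borderline example flips to the wrong class (which drives the error rate up). The logits are made up for illustration:

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 0, 1, 1])

# Epoch A: all four predictions correct, but only mildly confident.
logits_a = torch.tensor([[0.6, 0.0], [0.6, 0.0], [0.0, 0.6], [0.0, 0.1]])
# Epoch B: three predictions become very confident (and stay correct),
# while the borderline fourth example flips to the wrong class.
logits_b = torch.tensor([[4.0, 0.0], [4.0, 0.0], [0.0, 4.0], [0.1, 0.0]])

for name, logits in [("epoch A", logits_a), ("epoch B", logits_b)]:
    loss = F.cross_entropy(logits, labels)
    error_rate = (logits.argmax(dim=1) != labels).float().mean()
    print(f"{name}: loss={loss.item():.3f}, error_rate={error_rate.item():.2f}")

# epoch A: loss=0.489, error_rate=0.00
# epoch B: loss=0.200, error_rate=0.25  -> loss fell, error rate rose
```

So the mean loss and the error rate answer different questions: the loss averages confidence over all predictions, while the error rate only counts how many argmax decisions are wrong.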

Cheers, stay safe!
