Val_loss and val_acc

Hi guys, my first week here and my first post!

Though I initially thought I understood it, I've been struggling to see the relationship between loss and accuracy on the validation set. I saw a few similar topics in the forums, but no answer has helped me so far.

I would expect to see the loss go down as the accuracy ramps up; however, I'm seeing scenarios where that is not the case (and scenarios where it behaves as I expect, too [2]). For example:

[1]

Epoch 1/5
23000/23000 [==============================] - 649s - loss: 0.3727 - acc: 0.9763 - val_loss: 0.2350 - val_acc: 0.9845
Epoch 2/5
23000/23000 [==============================] - 651s - loss: 0.3576 - acc: 0.9774 - val_loss: 0.2279 - val_acc: 0.9850
Epoch 3/5
23000/23000 [==============================] - 652s - loss: 0.3867 - acc: 0.9754 - val_loss: 0.2260 - val_acc: 0.9855
Epoch 4/5
23000/23000 [==============================] - 652s - loss: 0.3630 - acc: 0.9770 - val_loss: 0.2225 - val_acc: 0.9860
Epoch 5/5
23000/23000 [==============================] - 651s - loss: 0.3688 - acc: 0.9766 - val_loss: 0.2177 - val_acc: 0.9860

[2]

Epoch 1/1
60000/60000 [==============================] - 19s - loss: 0.1123 - acc: 0.9654 - val_loss: 0.0358 - val_acc: 0.9869

For example, submitting [1] to Kaggle for the Cats and Dogs problem gave me a score of 0.10243. I did take into account chipping (used 0.5 and .95 and other parameters), so I don't understand why I got that score when the accuracy is 0.9766.

At some point, I suspected my validation set was not good, so I picked another random sample and also tried making it bigger. But I ended up with similar results.

I think my model might be overly confident and is being penalized by the loss function, but I don't understand why that happens or how to fix it.

Thanks in advance for any help!

Do you mean clipping instead of chipping, and 0.05 instead of 0.5? I'm not sure why you are surprised by your Kaggle score; it seems consistent with your training/validation loss (clipping makes the loss smaller).
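To see why clipping shrinks the loss, here is a minimal sketch (assuming the competition metric is binary log loss, and using made-up predictions for illustration). A single confidently wrong prediction dominates the average; clipping probabilities into [0.05, 0.95] caps how much it can be penalized:

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy, averaged over samples."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Toy labels/predictions: four confident calls, one of them badly wrong.
y_true = np.array([1, 1, 1, 1, 0])
y_pred = np.array([0.99, 0.99, 0.99, 0.01, 0.01])

print(log_loss(y_true, y_pred))                       # ~0.93, dominated by the one wrong 0.01
print(log_loss(y_true, np.clip(y_pred, 0.05, 0.95)))  # ~0.64, clipping caps the penalty
```

The trade-off is that clipping also slightly penalizes the correct confident predictions, but for an overconfident model the net effect on log loss is usually a win.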

The relationship between loss and accuracy is that the cross-entropy loss is a much smoother function than the accuracy. The accuracy only measures how many times you predicted the correct class, whereas the cross-entropy loss takes into account all the prediction probabilities. So as a general trend, when the loss goes down the accuracy goes up, but since the two functions measure different things they can move in different directions for a few steps.
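A small hand-rolled example (toy numbers, not from the logs above) makes the divergence concrete: a model that is always right but never confident can have a *worse* loss than a model that is sometimes wrong but well calibrated elsewhere:

```python
import numpy as np

def accuracy(y_true, p):
    """Fraction of correct predictions at a 0.5 threshold."""
    return np.mean((p > 0.5) == y_true)

def log_loss(y_true, p, eps=1e-15):
    """Binary cross-entropy, averaged over samples."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 1, 1, 0])
cautious  = np.array([0.60, 0.60, 0.60, 0.40])  # all correct, barely confident
confident = np.array([0.99, 0.99, 0.45, 0.01])  # one wrong, confident elsewhere

print(accuracy(y, cautious), log_loss(y, cautious))    # 1.00, loss ~0.51
print(accuracy(y, confident), log_loss(y, confident))  # 0.75, loss ~0.21
```

Here the cautious model wins on accuracy but loses on cross-entropy, which is exactly the kind of mismatch you can see between `acc` and `loss` across epochs.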

Yeah, I meant clipping, and I used 0.05; my mistake when writing the question :slight_smile:

Thanks for the answer!