High training loss but high validation accuracy

I was playing around with a pre-trained model and observed that after 1 epoch of fine-tuning, the training loss is higher than the validation loss, which might suggest that the model is still under-fitting. But what surprises me is the high validation accuracy.

To be more specific:
train_loss = 0.06
valid_loss = 0.02
valid_acc = 0.99

Any thoughts on this, folks?

What exactly is surprising?

The validation loss is a proxy for the accuracy metric. If validation loss goes down, validation accuracy usually goes up. Underfitting means that more training will reduce both losses. :slightly_smiling_face:
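To see why loss is only a proxy for accuracy, here is a toy sketch with made-up binary predictions (the numbers are invented for illustration, not from your model): two sets of predictions can have identical accuracy while the more confident one has a much lower cross-entropy loss.

```python
import math

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class
    return -sum(math.log(p if y == 1 else 1 - p)
                for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    # threshold predicted probabilities at 0.5
    return sum((p > 0.5) == (y == 1) for p, y in zip(probs, labels)) / len(labels)

labels    = [1, 1, 0, 0]
confident = [0.9, 0.8, 0.1, 0.2]    # all correct, confident -> low loss
hesitant  = [0.6, 0.55, 0.45, 0.4]  # all correct, barely -> higher loss

print(cross_entropy(confident, labels), accuracy(confident, labels))
print(cross_entropy(hesitant, labels), accuracy(hesitant, labels))
```

Both sets score 100% accuracy, but the hesitant predictions have several times the loss. That is why a loss of 0.06 vs 0.02 can coexist with 0.99 accuracy: accuracy only cares about which side of the threshold the prediction lands on, loss also cares about confidence.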

So here is a rough interpretation of your model at this point, in my opinion.
Your model is most probably underfitting. But since it is highly accurate on the validation set, it may be that your dataset is very small, or skewed, or not well divided into training and validation sets, or that your hyperparameters are chosen inappropriately. Also keep in mind that the training loss is typically averaged over the whole epoch while the weights are still improving (and with dropout/augmentation active, if you use them), whereas the validation loss is computed at the end of the epoch with those disabled, so a somewhat higher training loss after one epoch is common.
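One quick sanity check on the "skewed or badly divided split" possibility: look at the class proportions in each split. A minimal sketch, assuming you can get your labels as plain lists (the label lists below are hypothetical stand-ins for your data):

```python
from collections import Counter

# hypothetical label lists -- substitute your own dataset's labels
train_labels = ["cat"] * 500 + ["dog"] * 500
valid_labels = ["cat"] * 95 + ["dog"] * 5   # a badly skewed validation split

def class_fractions(labels):
    # fraction of the split belonging to each class
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}

print(class_fractions(train_labels))  # balanced 50/50
print(class_fractions(valid_labels))  # 95% one class
```

If one class dominates the validation set the way it does here, a model that mostly predicts the majority class will already look highly accurate, and the 0.99 stops being surprising.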
