While training a model for image classification, I noticed that the train_loss reported during training is higher than the train_loss I get when I evaluate the trained model on the training set afterwards.
These are the losses reported during training: train_loss is 0.137 and validation_loss is 0.178. But when I evaluate the finished model on the training set and the validation set, I get a different loss for the training set.
When I evaluated, the train_loss came out as 0.0155, as opposed to the 0.137 seen during training; the validation_loss, on the other hand, was the same as at training time.
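For reference, I computed the post-training loss on each set roughly like this (a minimal PyTorch sketch, since my exact code is longer; the model, loader, and loss-function names here are placeholders, not my real ones):

```python
import torch
import torch.nn as nn

def evaluate(model, loader, loss_fn, device="cpu"):
    # eval mode disables dropout and uses running batch-norm statistics,
    # unlike the train-mode forward passes behind the logged train_loss
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # weight per-batch loss by batch size so the final average
            # is the mean loss over the whole set, with fixed weights
            total += loss_fn(model(x), y).item() * x.size(0)
            count += x.size(0)
    return total / count

# evaluate(model, train_loader, nn.CrossEntropyLoss())  # -> 0.0155 on my run
```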
Can someone explain why this is happening?