Validation loss starts showing at later batches

Not sure if this has already been answered. My question is: why does the validation loss start showing up a few batches later than the training loss? I am referring to the figure below; I am sure everyone has observed this.

[image: plot of training loss vs. validation loss over batches]


The validation step is performed after the training step.

The validation step checks the performance of the current network weights, so there is no need to run it after every iteration while the weights are constantly changing.
The training loss is recorded in on_backward_begin, i.e. after each iteration, while the validation loss is recorded in on_epoch_end, once after every epoch.

You may also have noticed that the blue line is much straighter; that is because it is updated only once per epoch.
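
To make the timing concrete, here is a minimal, framework-agnostic sketch in plain PyTorch (the model, dataloader, and optimizer names are hypothetical, and this is a simplified stand-in for fastai's callback machinery, not its actual implementation). It shows why the training curve gets one point per batch while the validation curve gets one point per epoch:

```python
import torch


def fit(model, train_dl, valid_dl, opt, loss_fn, epochs=3):
    """Sketch of a training loop that logs losses on two different schedules."""
    train_losses, valid_losses = [], []
    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            loss = loss_fn(model(xb), yb)
            loss.backward()
            # Training loss is known after every batch, so it is logged
            # here, once per iteration (cf. on_backward_begin).
            train_losses.append(loss.item())
            opt.step()
            opt.zero_grad()
        # Validation runs only after the whole epoch of updates,
        # so its curve gets a single point per epoch (cf. on_epoch_end).
        model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for xb, yb in valid_dl:
                total += loss_fn(model(xb), yb).item() * len(xb)
                n += len(xb)
        valid_losses.append(total / n)
    return train_losses, valid_losses
```

Since the first validation point only exists after the first epoch finishes, the validation curve necessarily starts later on the batch axis than the training curve.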


Thanks for the explanation @Kornel 🙂