Should you revert weights if an epoch gives worse results?

If the validation accuracy has dropped after running an epoch, should you revert the weights to what they were before that epoch, before running more epochs?

The reasoning being: if the new weights give a worse validation error, then they are, by definition, a worse set of weights than the ones from before that epoch was run. So why would we keep them?

(Assuming random data augmentation / shuffled training data order, reverting and then re-running an epoch will not simply repeat the same updates.)

I think that looking only at whether the validation error ticks up or down between epochs, especially for changes of small magnitude, is not the right approach.

You want to pick up on a bigger trend, meaning: are you overfitting? Does your validation error continue to decrease as training progresses? If so, that is a good sign! If instead it starts climbing past a certain point while the training error keeps falling, you are overfitting, and you might want to increase the regularization, decrease architecture complexity, or increase your training set (data augmentation etc.).
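For instance, here is a minimal sketch of watching the trend rather than reacting to a single epoch. This is plain Python; the function name and the window size are made up for illustration:

```python
def is_overfitting(val_losses, window=5):
    """Compare the mean of the most recent `window` validation losses
    against the mean of the `window` before it. A sustained rise
    suggests overfitting; single-epoch noise mostly cancels out."""
    if len(val_losses) < 2 * window:
        return False  # not enough history to call a trend yet
    recent = sum(val_losses[-window:]) / window
    earlier = sum(val_losses[-2 * window:-window]) / window
    return recent > earlier

# Usage: append the validation loss after each epoch, then check the trend.
history = [0.70, 0.68, 0.66, 0.65, 0.64, 0.65, 0.66, 0.68, 0.70, 0.73]
print(is_overfitting(history))  # True: the recent window averages higher
```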

There is also the early stopping approach, where you stop training around the point where the validation error starts to go up. In this approach you do retain the best set of weights seen so far, somewhat like you describe.
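As a rough illustration, here is a minimal early-stopping sketch in PyTorch. The `train_one_epoch` and `evaluate` helpers are hypothetical stand-ins for your actual training loop; note that training continues from the current weights after a bad epoch, and only the checkpoint remembers the best ones:

```python
import copy

def fit(model, optimizer, train_one_epoch, evaluate, max_epochs=100, patience=5):
    best_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())  # snapshot of best weights
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer)
        val_loss = evaluate(model)

        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            # Keep training from the current (slightly worse) weights.
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation error has stopped improving

    model.load_state_dict(best_state)  # restore the best weights at the end
    return model
```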

But I think your question is more specific: you are asking about walking down the error surface. From what I recall, the optimization algorithms can be quite involved, meaning that due to the smart things they do, it might be easier for them to continue walking down from the weights that look slightly worse on the validation set than from the better weights of the iteration before. But that is just a guess at this point.
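One concrete consequence of that optimizer state: methods like SGD with momentum or Adam carry internal buffers (momentum, moment estimates), so if you did revert after a bad epoch, you would want to revert that state too, not just the weights. A sketch of snapshotting both, assuming PyTorch:

```python
import copy
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Snapshot before running the epoch.
weights_before = copy.deepcopy(model.state_dict())
opt_state_before = copy.deepcopy(optimizer.state_dict())

# ... run one epoch, then evaluate on the validation set ...

val_got_worse = True  # placeholder for your actual check
if val_got_worse:
    # Revert both, not just the weights, or the optimizer's momentum /
    # moment estimates will be inconsistent with the restored parameters.
    model.load_state_dict(weights_before)
    optimizer.load_state_dict(opt_state_before)
```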

Nonetheless, in practice, I still believe you would probably be better off focusing on the larger picture of what is happening: whether you are overfitting, where you are in training, etc.

Oh man, haven’t done deep learning for quite a while now, so please take my advice with a grain of salt. Would be interesting to see what others have to say on this, though.