When I save a model using learn.save('first', with_opt=True) and then load it using learn.load('first', with_opt=True) to train for a few more epochs, the validation loss generally increases for the first few epochs. The validation loss at the first epoch after loading is usually higher than the loss at the time the model was saved.
Hey,
When using fit_one_cycle it is common for the test/validation error to oscillate a little bit. I do not believe the issue you are facing is caused by saving the model. The original one-cycle paper also mentions this phenomenon: the validation loss sometimes increases before decreasing again.
If you have run the ResNet-50 pets classifier from Lecture 1, the validation loss oscillated a bit in my case, at least, as well.
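One reason the loss can jump right after loading: each call to fit_one_cycle starts a fresh cycle, so the learning rate climbs back up before annealing down. Here is a rough pure-Python sketch of a one-cycle-style schedule to show that shape (this is an illustration only, not fastai's exact implementation; the function name and default values are made up):

```python
import math

def one_cycle_lr(step, total_steps, lr_max=1e-3, pct_start=0.25,
                 div=25.0, div_final=1e5):
    """Sketch of a one-cycle schedule: cosine warmup from lr_max/div
    up to lr_max, then cosine annealing down to lr_max/div_final."""
    warm = int(total_steps * pct_start)
    if step < warm:
        t = step / max(warm, 1)
        lo, hi = lr_max / div, lr_max          # warmup phase
    else:
        t = (step - warm) / max(total_steps - warm, 1)
        lo, hi = lr_max, lr_max / div_final    # annealing phase
    # cosine interpolation between lo and hi
    return lo + (hi - lo) * (1 - math.cos(math.pi * t)) / 2

lrs = [one_cycle_lr(s, 100) for s in range(100)]
# The LR rises early in every cycle, so a new fit_one_cycle call after
# loading re-enters a high-LR phase, which can bump validation loss
# before it starts falling again.
```

So seeing the loss rise for a few epochs after loading and then come back down is the expected behaviour of the schedule, not a sign that the saved weights were corrupted.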