Sometimes, when I specify that I want the training to last for n epochs in the fit_one_cycle method, the training goes on for n+1 epochs. See the example in the picture. I have set the number of epochs equal to 3, but it seems I get 4 epochs of training.
def do_fit(bs, sz, epochs, lr, freeze=True):
    if freeze:
        if learn.opt is not None: learn.opt.clear_state()
        learn.freeze()
        learn.fit_one_cycle(epochs, slice(lr))
    learn.unfreeze()
    learn.fit_one_cycle(epochs, slice(lr))
Never mind, it was a silly bug. I forgot the else after the if statement. So, after the fit_one_cycle with the model frozen, it moved on to the fit_one_cycle with the model unfrozen. Thus the one epoch became two epochs…
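For reference, a sketch of the corrected function with the missing else branch, so each call runs fit_one_cycle exactly once. The StubLearner here is just a stand-in I'm using to illustrate the control flow; in the real code, learn would be a fastai Learner.

```python
class StubLearner:
    """Minimal stand-in for a fastai Learner that records which methods run."""
    def __init__(self):
        self.calls = []
        self.opt = None  # no optimizer state yet

    def freeze(self): self.calls.append("freeze")
    def unfreeze(self): self.calls.append("unfreeze")
    def fit_one_cycle(self, epochs, lr): self.calls.append(("fit", epochs))

learn = StubLearner()

def do_fit(bs, sz, epochs, lr, freeze=True):
    if freeze:
        if learn.opt is not None: learn.opt.clear_state()
        learn.freeze()
        learn.fit_one_cycle(epochs, slice(lr))
    else:  # the branch I originally forgot
        learn.unfreeze()
        learn.fit_one_cycle(epochs, slice(lr))

do_fit(bs=64, sz=224, epochs=3, lr=1e-3)
print(learn.calls)  # fit_one_cycle now runs only once per call
```

With the else in place, a call with freeze=True never falls through into the unfreeze phase, so training for n epochs really lasts n epochs.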