Sometimes, when I specify that I want the training to last for n epochs in the fit_one_cycle method, the training goes on for n+1 epochs. See the example in the picture: I have set the number of epochs to 3, but it seems I get 4 epochs of training.
I am experiencing this in both version 1.0 and version 2.0 of the fastai library.
Without seeing the code of your do_fit function, it’s hard to understand what’s going on.
Sure, it’s nothing much. Something like this…
def do_fit(bs, sz, epochs, lr, freeze=True):
    if learn.opt is not None: learn.opt.clear_state()
The learn variable comes from this:
learn = cnn_learner(dbch, xresnet50, loss_func=loss_func, opt_func=opt_func, metrics=metrics)
Never mind, it was a silly bug. I forgot the else after the if statement, so after the fit_one_cycle with the model frozen it moved on to the fit_one_cycle with the model unfrozen. Thus the one epoch became two epochs…
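For anyone who hits the same off-by-one, here is a minimal sketch of the bug. The names (do_fit_buggy, do_fit_fixed, FakeLearner) are illustrative stand-ins, not the original code: the fake learner just counts how many epochs fit_one_cycle runs, so you can see the missing else double the training.

```python
class FakeLearner:
    """Stand-in for a fastai Learner that records how many epochs it runs."""
    def __init__(self):
        self.epochs_run = 0
    def freeze(self): pass
    def unfreeze(self): pass
    def fit_one_cycle(self, epochs, lr):
        self.epochs_run += epochs

def do_fit_buggy(learn, epochs, lr, freeze=True):
    if freeze:
        learn.freeze()
        learn.fit_one_cycle(epochs, lr)
    # Missing `else`: the lines below ALWAYS run, so the frozen fit above
    # is followed by a second, unfrozen fit -- n epochs become 2n.
    learn.unfreeze()
    learn.fit_one_cycle(epochs, lr)

def do_fit_fixed(learn, epochs, lr, freeze=True):
    if freeze:
        learn.freeze()
        learn.fit_one_cycle(epochs, lr)
    else:
        learn.unfreeze()
        learn.fit_one_cycle(epochs, lr)

buggy, fixed = FakeLearner(), FakeLearner()
do_fit_buggy(buggy, epochs=1, lr=1e-3)   # runs 2 epochs instead of 1
do_fit_fixed(fixed, epochs=1, lr=1e-3)   # runs exactly 1 epoch
```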