Tot_epochs in fit_one_cycle

I would like to run fit_one_cycle ten times, each with a cycle length of one. However, this code stops after one epoch.

RLR = ReduceLROnPlateauCallback(learn, patience=2, factor=0.5)
SAVEML = SaveModelCallback(learn, every='improvement')
learn.fit_one_cycle(1, [3e-6, 3e-5, 3e-4], tot_epochs=10, callbacks=[RLR, SAVEML])

Can anyone clarify?

This question was asked before, but received no responses.
https://forums.fast.ai/t/tot-epochs-in-fit-one-cycle/40855

Thanks!


As the name indicates, fit_one_cycle does only one cycle. tot_epochs is just an argument that sets the full length of the schedule, so training stops before the end of it (I’m guessing it’s for when you resume training at a certain epoch).
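If that’s right, my guess at the intended use would be something like this (untested sketch; start_epoch and tot_epochs are real fit_one_cycle arguments in fastai v1, but I’m not certain of the exact resume semantics):

# Resume an interrupted 10-epoch 1cycle run at epoch 4: the scheduler
# rebuilds the full 10-epoch schedule and skips ahead to epoch 4.
learn.fit_one_cycle(10, 3e-4, tot_epochs=10, start_epoch=4)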

Then is there any way to do the equivalent of
learn.fit(10, [3e-6, 3e-5, 3e-4], callbacks=[RLR, SAVEML])

except using the fit_one_cycle policy?
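For instance, would a plain loop over ten 1-epoch cycles be the right way? A sketch of what I mean, reusing the callbacks defined above:

# Ten back-to-back cycles, each a complete 1-epoch 1cycle schedule
for i in range(10):
    learn.fit_one_cycle(1, [3e-6, 3e-5, 3e-4], callbacks=[RLR, SAVEML])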

Oh, maybe I can add the fit_one_cycle scheduling callback to fit(). Would that be a correct approach? Thanks.
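Something like this, maybe (a guess based on fastai v1’s OneCycleScheduler callback; I haven’t verified it):

from fastai.callbacks import OneCycleScheduler

# Build a 1cycle schedule spanning all 10 epochs, then pass it to fit()
sched = OneCycleScheduler(learn, lr_max=3e-4)
learn.fit(10, [3e-6, 3e-5, 3e-4], callbacks=[RLR, SAVEML, sched])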

You want to mix two incompatible learning rate schedules, as far as I can see. Once you have decided on a given LR schedule, the GeneralScheduler API can implement it, but you can’t have reduce-on-plateau and 1cycle together.
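For example, here is a rough sketch of a 10-epoch custom schedule with GeneralScheduler (fastai v1; the phase lengths, LR range, and annealing functions are just illustrative):

from fastai.callback import annealing_cos
from fastai.callbacks.general_sched import GeneralScheduler, TrainingPhase

n = len(learn.data.train_dl)  # batches per epoch
phases = [
    # warm the LR up over 3 epochs, then anneal it down over 7
    TrainingPhase(3 * n).schedule_hp('lr', (3e-5, 3e-4), anneal=annealing_cos),
    TrainingPhase(7 * n).schedule_hp('lr', (3e-4, 3e-6), anneal=annealing_cos),
]
learn.fit(10, callbacks=[GeneralScheduler(learn, phases)])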