Two epochs being trained when one specified

Hi all, I’m working through Fastbook in Colab, and when I run fine_tune(1) it seems to train two epochs, printing them separately. Below is an image of what I’m talking about. Does anyone have any ideas why that’s happening?

Thanks,
Iain

fine_tune follows a transfer-learning regimen: one epoch is always trained with the model frozen (unless you override it with freeze_epochs=x), and then however many epochs you pass in are trained unfrozen, with an adjusted learning rate.
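
For reference, here is a minimal sketch in the style of the Fastbook chapter 1 example (the dataset, labelling function, and model here are assumptions; substitute whatever you’re actually training) showing why fine_tune(1) prints two epoch tables:

from fastai.vision.all import *

# Assumed setup from Fastbook chapter 1 (cats vs. dogs); swap in your own data/model.
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()   # Pets convention: cat filenames start with a capital letter

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate)  # called cnn_learner in older fastai / the book

learn.fine_tune(1)   # 1 frozen epoch (the default) + 1 unfrozen epoch -> two result tables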


Ah, so fine_tune handles both the frozen and the unfrozen training! I wrongly assumed that it was simply a replacement for fit_one_cycle and thought there was an issue with the configured environment.

Thanks for clearing that up, Zachary!

If you check the source code for it, you will find something akin to:

learn.freeze()
learn.fit_one_cycle(freeze_epochs, slice(base_lr))            # frozen phase: only the new head trains
base_lr /= 2
learn.unfreeze()
learn.fit_one_cycle(epochs, slice(base_lr/lr_mult, base_lr))  # unfrozen phase with discriminative learning rates

Jeremy and Sylvain found that one frozen epoch is usually enough to get the new head trained before unfreezing the rest of the weights, which is why freeze_epochs defaults to 1.
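
If one frozen epoch isn’t enough for your problem, you can pass more (a sketch using the current fastai parameter names):

learn.fine_tune(3, freeze_epochs=2)   # 2 frozen epochs, then 3 unfrozen epochs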
