"fine_tune" vs. "fit_one_cycle"

In addition to what’s already been said:

I was figuring out the exact same thing tonight. Looking at the source code is the easiest way for me to wrap my head around it (see below).

`fine_tune` is a particular combination of `fit_one_cycle` calls plus freeze/unfreeze steps that works well in many (if not most) situations...

from https://github.com/fastai/fastai2/blob/master/fastai2/callback/schedule.py#L151


@patch
@delegates(Learner.fit_one_cycle)
def fine_tune(self:Learner, epochs, base_lr=2e-3, freeze_epochs=1, lr_mult=100,
              pct_start=0.3, div=5.0, **kwargs):
    "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` for `epochs` using discriminative LR"
    self.freeze()                      # train only the head (the randomly initialized layers)
    self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
    base_lr /= 2                       # halve the LR before touching the pretrained body
    self.unfreeze()                    # make every layer group trainable
    self.fit_one_cycle(epochs, slice(base_lr/lr_mult, base_lr), pct_start=pct_start, div=div, **kwargs)
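
So `learn.fine_tune(3)` with the defaults above is roughly the same as spelling the steps out yourself. Here is a minimal sketch of that equivalence; the dataset and learner setup are illustrative, not from the original post (imports use the current fastai namespace, which was `fastai2` when this was written):

from fastai.vision.all import *

# illustrative setup: any Learner with a pretrained, frozen body behaves the same
dls = ImageDataLoaders.from_folder(untar_data(URLs.MNIST_SAMPLE))
learn = cnn_learner(dls, resnet34, metrics=accuracy)

# learn.fine_tune(3) unrolled, using fine_tune's default arguments:
learn.freeze()
learn.fit_one_cycle(1, slice(2e-3), pct_start=0.99)                    # freeze_epochs=1, base_lr=2e-3
learn.unfreeze()
learn.fit_one_cycle(3, slice(1e-3/100, 1e-3), pct_start=0.3, div=5.0)  # base_lr halved, lr_mult=100

The `slice(base_lr/lr_mult, base_lr)` in the second call is what gives the discriminative learning rates: the earliest (most general) pretrained layer groups get the smallest LR and the head gets the largest.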