I’m at lesson 3 now and there is one thing I can’t get my head around. I’m afraid that if I don’t overcome this issue, I won’t be able to understand anything that comes later.
Why do we need to call fit_one_cycle for 4 epochs, then save, then unfreeze, then find the learning rate with lr_find, and then call fit_one_cycle again?
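To make sure I’m describing the steps correctly, this is roughly the sequence from the lesson notebook that I mean (fastai v1; data is the DataBunch built earlier in the notebook, and the exact epoch counts and learning rates here are from memory, so they may not match the notebook exactly):

```python
from fastai.vision import *

learn = cnn_learner(data, models.resnet34, metrics=error_rate)

learn.fit_one_cycle(4)        # 1. train with the pretrained body frozen
learn.save('stage-1')         # 2. save a checkpoint

learn.unfreeze()              # 3. make the whole network trainable
learn.lr_find()               # 4. search for a good learning rate
learn.recorder.plot()         #    ...and inspect the plot

learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-3))  # 5. train again, usually with better results
learn.save('stage-2')
```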
What is the purpose of unfreeze? And why do we usually get a better result after calling unfreeze and then fit_one_cycle again?
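As far as I can tell, unfreeze just makes the pretrained layers trainable again, which I checked with a quick snippet like the one below (n_trainable is a little helper I wrote myself, not a fastai function), but that doesn’t tell me why the two-stage approach works better:

```python
def n_trainable(model):
    """Count parameters that the optimizer will actually update."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(n_trainable(learn.model))  # body frozen: mostly just the new head
learn.unfreeze()
print(n_trainable(learn.model))  # after unfreeze: essentially the whole network
```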
Why doesn’t fit_one_cycle do all of these things for us?
Why not just
learn.fit_one_cycle(10)
learn.save('final_result_here')
I know there is already a post about this here: Why do we need to unfreeze the learner everytime before retarining even if learn.fit_one_cycle() works fine without learn.unfreeze()
But reading it left me even more confused.
Thank you.