I always set the learning rate in vision_learner, but the other day I watched Jeremy declare it in the fine_tune method, e.g. fine_tune(12, 0.01). If you check the vision_learner documentation, you can also pass an lr parameter there. Is there a real benefit to passing it to the fine_tune method instead?
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune??
Signature:
learn.fine_tune(
    epochs,
    base_lr=0.002,
    …
)
The learning rate passed to fine_tune overrides the one set in vision_learner, whether or not you pass one explicitly, because base_lr has a default value (0.002). fine_tune hands base_lr down to fit_one_cycle, so the learner's own lr is never consulted.
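Here is a toy sketch of why the override happens (this is NOT the real fastai source; ToyLearner and its attribute names are hypothetical, purely to illustrate the default-argument mechanics):

```python
# Toy illustration: because fine_tune's base_lr has a default,
# a value is ALWAYS forwarded to fit_one_cycle, shadowing the
# lr stored on the learner at construction time.
class ToyLearner:
    def __init__(self, lr=0.001):  # lr set at "vision_learner" time
        self.lr = lr
        self.used_lr = None

    def fit_one_cycle(self, epochs, lr_max=None):
        # falls back to self.lr only when no lr_max is given
        self.used_lr = self.lr if lr_max is None else lr_max

    def fine_tune(self, epochs, base_lr=0.002):
        # base_lr always has a value, so self.lr is never reached
        self.fit_one_cycle(epochs, lr_max=base_lr)

learn = ToyLearner(lr=0.1)  # like vision_learner(..., lr=0.1)
learn.fine_tune(1)          # no lr passed, yet default 0.002 wins
print(learn.used_lr)        # 0.002, not 0.1
```

So if you want a specific learning rate to actually take effect during fine-tuning, pass it to fine_tune directly.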
So if you declare the learning rate in vision_learner, it gets overwritten by the base_lr in the fine_tune method?
Yes, correct.