Repeated fit_one_cycle learning rates

Hello everyone,

I am trying to automate the training process for a model, so that once it is started it can continue on its own until it overfits. I was thinking of using multiple one-cycle runs (of 3-5 epochs each) to do this. My problem is how to choose the learning rate.

Should I:

  1. pick lr using the LR finder -> [train one cycle -> decay lr by 0.9 (or some other factor) -> repeat until overfitting]
  2. pick lr using the LR finder -> [train one cycle -> pick lr using the LR finder again (automated somehow) -> repeat until overfitting]
  3. pick lr using the LR finder -> [train one cycle -> decay lr only if val_loss is not decreasing -> repeat until overfitting]

Or maybe some other way?
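
For concreteness, here is a minimal sketch of what I mean by option 3, assuming fastai v1 and an already-built `Learner` named `learn`; the cycle length, decay factor, and patience values are just placeholders:

```python
def train_until_overfit(learn, lr, cycle_len=4, decay=0.9, patience=2):
    """Option 3: repeat one-cycle runs, decaying lr only when val loss stalls.

    Stops once the validation loss has failed to improve for `patience`
    consecutive cycles, as a crude proxy for overfitting.
    """
    best_val = float("inf")
    bad_cycles = 0
    while bad_cycles < patience:
        learn.fit_one_cycle(cycle_len, max_lr=lr)
        val_loss = learn.validate()[0]  # first element is the validation loss
        if val_loss < best_val:
            best_val, bad_cycles = val_loss, 0
        else:
            bad_cycles += 1
            lr *= decay  # decay only when validation stopped improving
    return best_val
```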

Thank you,
Andrei

The approach that worked for me is:

  1. Train for a couple of epochs without unfreezing,
  2. Run the LR finder and get min_grad_lr (https://docs.fast.ai/callbacks.lr_finder.html),
  3. Use that rate to train again,
  4. The steps after that will vary, but I find TTA particularly useful; a rough sketch of the whole flow is below.
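
A rough translation of those steps into fastai v1 code (the epoch counts and the `slice` range for discriminative learning rates are my own assumptions, not a tested recipe):

```python
learn.fit_one_cycle(2)                    # 1. a couple of epochs, body frozen
learn.lr_find()                           # 2. run the LR finder
learn.recorder.plot(suggestion=True)      #    this stores min_grad_lr
lr = learn.recorder.min_grad_lr
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(lr / 10, lr))  # 3. train again with it
preds, targets = learn.TTA()              # 4. test-time augmentation
```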

Have fun!

Hello,

I understand that, but what would be the best way to automate step 2 you mentioned? Picking the minimum-gradient point doesn't always seem to work.

If that’s the case, you could add per-case conditionals. Say you’re on epoch X and the suggested learning rate is A, but in your experience A doesn’t usually give good results at that point in training; then override it with a custom learning rate instead.
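
A hedged sketch of that idea; the acceptable range and fallback value here are arbitrary and would need tuning per case:

```python
def pick_lr(learn, fallback=1e-3, lo=1e-5, hi=1e-1):
    """Accept the LR finder's suggestion only if it lands in a range that
    has worked before; otherwise fall back to a hand-picked default."""
    learn.lr_find()
    learn.recorder.plot(suggestion=True)  # stores min_grad_lr on the recorder
    suggested = learn.recorder.min_grad_lr
    return suggested if lo <= suggested <= hi else fallback
```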