A helper for training epochs with different learning rates

Following Jeremy’s approach to training with varying learning rates, I often found myself fiddling with a small rate, then bumping it up and slowly lowering it. I wrote a quick helper for defining a list of (# epochs, learning rate) pairs that get fed into a LearningRateScheduler callback used during training.

I know there are plenty of better ways to do this, but making the learning rate changes explicit has helped me understand how loss, etc. progressed over the course of training. It’s also helpful if you’re experimenting with different models, data augmentation, etc. and want to make sure you train on the same progression (especially if you’re like me and always manage to miss changing one).

It’s been useful to me. Maybe it will be to you.

Gist with function definition and example usage attached:
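For anyone skimming, here’s a minimal sketch of what this kind of helper might look like with Keras’s `LearningRateScheduler`. The name `make_lr_scheduler` and the exact `(# epochs, lr)` pair format are my assumptions for illustration; the gist itself may differ:

```python
# Sketch of a (# epochs, learning rate) schedule helper -- an assumed
# illustration, not necessarily the code in the gist.
from tensorflow.keras.callbacks import LearningRateScheduler

def make_lr_scheduler(pairs):
    """Build a LearningRateScheduler from a list of (num_epochs, lr) pairs.

    e.g. [(3, 1e-3), (5, 1e-4), (2, 1e-5)] trains 3 epochs at 1e-3,
    then 5 at 1e-4, then 2 at 1e-5.
    """
    # Expand the pairs into one learning rate per epoch.
    per_epoch = [lr for n_epochs, lr in pairs for _ in range(n_epochs)]

    def schedule(epoch, lr=None):
        # Clamp to the last rate if training runs past the schedule.
        return per_epoch[min(epoch, len(per_epoch) - 1)]

    return LearningRateScheduler(schedule, verbose=1), len(per_epoch)

# Usage: the same pairs list can be reused across experiments so every
# model/augmentation variant trains on an identical progression.
# scheduler, total_epochs = make_lr_scheduler([(3, 1e-3), (5, 1e-4), (2, 1e-5)])
# model.fit(x_train, y_train, epochs=total_epochs, callbacks=[scheduler])
```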
