Implementing learning rate annealing in PyTorch

I am looking to write a simple implementation of learning rate annealing in raw PyTorch, partly to learn and partly because I have some unusual data that is difficult to work into fastai.
I looked at the fastai code and was not able to easily reconstruct the functionality.

My question is: can the learning rate just be updated every so often in the training loop by a simple assignment to the optimizer's parameters? If so, what would that assignment look like?

Thanks for any help!

Check out the lr_scheduler package in PyTorch (`torch.optim.lr_scheduler`). It already provides several annealing schedules, and you can just wrap one around your optimizer.
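To answer the direct-assignment part of your question: yes, each entry in `optimizer.param_groups` carries an `'lr'` key that the optimizer reads on every step, so assigning to it mid-training is enough. Here is a minimal sketch of both approaches; the model, learning rates, and epoch count are placeholders, not anything specific to your setup:

```python
import torch
import torch.nn as nn

# Toy model and optimizer, standing in for whatever you are training.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Option 1: a built-in scheduler, e.g. cosine annealing over 100 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    # ... run your training loop for one epoch here ...
    optimizer.step()   # update weights as usual
    scheduler.step()   # then advance the schedule (once per epoch here)

# Option 2: manual annealing by assigning the learning rate directly.
# Every param group has its own 'lr' entry, so set it in all of them.
new_lr = 0.01
for param_group in optimizer.param_groups:
    param_group['lr'] = new_lr
```

Option 2 is all you need for a hand-rolled schedule: compute whatever learning rate you want at the current step and write it into the param groups before calling `optimizer.step()`.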

1 Like

Better yet, check this out: https://github.com/bluesky314/Cyclical_LR_Scheduler_With_Decay_Pytorch. It's super simple to use.
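If you would rather stay with the built-in schedulers, `torch.optim.lr_scheduler.CyclicLR` covers the basic cyclical case (the decay variant from the linked repo is not built in, though `mode='triangular2'` shrinks the cycle amplitude over time). A rough sketch with made-up bounds and step sizes:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Cycle the learning rate between base_lr and max_lr.
# Note: CyclicLR is stepped once per batch, not once per epoch.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.001, max_lr=0.01,
    step_size_up=200, mode='triangular2')

for batch_idx in range(1000):
    # ... forward pass, loss, backward pass ...
    optimizer.step()
    scheduler.step()
```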

1 Like