I am looking to make a simple implementation of learning rate annealing in raw PyTorch, partly to learn and partly because I have some unusual data that is difficult to work into fastai.
I looked at the fastai source code but was not able to easily reconstruct the functionality.
My question is: can the learning rate simply be updated every so often in the training loop by assigning directly to the optimizer's parameters? If so, what would that assignment look like?
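To make the question concrete, here is a sketch of the kind of assignment I have in mind, based on my understanding that PyTorch optimizers store their hyperparameters in `optimizer.param_groups` (the model, optimizer, and `set_lr` helper below are just illustrative):

```python
import torch

# Toy model and optimizer, just for illustration.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def set_lr(optimizer, lr):
    # Each param group carries its own "lr" entry, so overwrite
    # it in place for every group.
    for group in optimizer.param_groups:
        group["lr"] = lr

# Called every so often in the training loop, e.g. per epoch:
set_lr(optimizer, 0.01)
print(optimizer.param_groups[0]["lr"])
```

Is overwriting `group["lr"]` like this safe mid-training, or does the optimizer cache the value somewhere else?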
Thanks for any help!