Does lr_find modify the weights of the model?

Does calling lr_find affect the weights of the model? Many notebook examples seem to reload the model’s state as it was before calling lr_find, and only then train it further.


The LR finder saves the current weights when it starts. It then runs several short training attempts, recording the loss at each learning rate (which obviously alters the weights). Finally, it restores the weights it saved at the beginning.
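The save/train/restore cycle described above can be sketched in plain Python. This is a minimal illustration, not fastai's actual code; `lr_find_sketch` and the toy `train_step` are hypothetical names:

```python
import copy

def lr_find_sketch(weights, train_step, lrs):
    """Hypothetical sketch of the LR finder's save/restore behaviour."""
    saved = copy.deepcopy(weights)   # 1. save the current weights first
    losses = []
    for lr in lrs:                   # 2. short training attempts at each lr
        losses.append(train_step(weights, lr))  # this mutates the weights
    weights.clear()                  # 3. restore the saved weights at the end
    weights.update(saved)
    return losses

# Toy usage: a fake "train step" that perturbs the weights and reports a loss.
def train_step(weights, lr):
    weights["w"] -= lr * 2.0             # pretend gradient step
    return (weights["w"] - 1.0) ** 2     # pretend loss

weights = {"w": 0.5}
losses = lr_find_sketch(weights, train_step, [1e-5, 1e-4, 1e-3])
assert weights == {"w": 0.5}  # the weights are back where they started
```

Note that inside the loop the weights carry over from one learning rate to the next; only at the very end are they restored, which is exactly the behaviour questioned further down the thread.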

(@bmetge: note that in English the word is “modifying”; it would be better to correct the title.)


On a related note, is lr_find() resetting the weights for each learning rate?

Intuitively it should, so that the losses across learning rates can be meaningfully compared. But from a quick look at the code, it seems the optimization keeps running across the different learning rates without resetting the weights in between. The weights are restored only at the end.

Am I missing something? Is this how it is supposed to work?

Looking at the code, I think you are right. Better to wait for a more informed answer, though.

(I agree that the weights should be reset at each LR increment to obtain a more meaningful response.)

Because it starts from a very low learning rate and performs only a few (~100) iterations, statistically it probably doesn’t matter whether you reset the weights or not. Resetting the weights would also make the process longer.
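For intuition on why those early steps barely matter: the finder typically sweeps the learning rate geometrically from a very small value up to a large one over on the order of 100 iterations. A sketch of such a schedule (the function name and the bounds are illustrative assumptions, not fastai's code):

```python
def exponential_lrs(lr_min=1e-7, lr_max=10.0, n_steps=100):
    """Geometric sweep from lr_min to lr_max over n_steps iterations."""
    ratio = (lr_max / lr_min) ** (1.0 / (n_steps - 1))
    return [lr_min * ratio ** i for i in range(n_steps)]

lrs = exponential_lrs()
# The first iterations use tiny learning rates, so the weights barely move,
# which is why skipping a per-lr reset has little statistical effect.
```

With these illustrative defaults, roughly the first half of the sweep stays below 1e-3, so most of the weight movement happens only in the last few, large-lr steps.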