The method

`learn.lr_find()`

helps you find an optimal learning rate. It uses the technique developed in the 2015 paper Cyclical Learning Rates for Training Neural Networks, where we simply keep increasing the learning rate from a very small value until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like.
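For context, here is a minimal conceptual sketch of that LR range test, assuming an exponential schedule and a divergence cutoff at a multiple of the best loss (this is an illustration, not fastai's actual code; `lr_range_test`, `stop_factor`, and the toy loss function are all hypothetical names chosen for this sketch):

```python
# Conceptual sketch of the LR range test: increase the learning rate
# exponentially each mini-batch, record the loss, and stop early once
# the loss blows up past some multiple of the best loss seen so far.

def lr_range_test(loss_fn, start_lr=1e-5, end_lr=10.0, num_steps=100, stop_factor=4.0):
    """Return (lrs, losses) recorded until divergence or num_steps."""
    mult = (end_lr / start_lr) ** (1 / (num_steps - 1))  # per-step LR multiplier
    lr, best = start_lr, float("inf")
    lrs, losses = [], []
    for _ in range(num_steps):
        loss = loss_fn(lr)             # stand-in for training one batch at this lr
        lrs.append(lr)
        losses.append(loss)
        best = min(best, loss)
        if loss > stop_factor * best:  # divergence check: the run ends early here
            break
        lr *= mult
    return lrs, losses

# Toy loss surface: improves while lr < 0.1, then diverges.
toy_loss = lambda lr: (0.1 / lr) if lr < 0.1 else lr * 100
lrs, losses = lr_range_test(toy_loss)
print(f"stopped after {len(lrs)} of 100 steps")
```

Because the run ends as soon as the loss diverges rather than after a fixed number of batches, the sweep usually terminates partway through the schedule.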

The above statement is from the lesson 1 notebook. What numerical value does "a very small value" correspond to, and is there a threshold up to which the loss is computed and the learning rate recorded?

Also, when I run the statement `lrf = learn.lr_find()` multiple times, it sometimes stops after 81% and sometimes after 68%. Why the difference?