Learning rate finder: wrong usage, or a bug?

Hello!
I’ve got a question; I hope someone can help me out. Thanks in advance!

I have prepared my script for the dog breeds example.
Before doing any training, I ran the learning rate finder, and it shows this:
[image: learning rate finder plot]

Based on that, I chose LR = 1e-1 and ran 3 epochs with precompute, then 3 epochs without precompute:
[image: training results with LR = 1e-1]

My model starts to overfit quite quickly.
If I re-run everything with LR = 1e-2, the results are somewhat better:
[image: training results with LR = 1e-2]
Is there anything I’m missing about the LR finder method?
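For context, here is a minimal pure-Python sketch of the idea behind an LR range test (this is my own illustration, not the fastai implementation; the function names `lr_range_test` and `suggest_lr` and the toy quadratic loss are made up for the example): the learning rate is increased exponentially each step while the loss is recorded, and a common heuristic picks an LR near the steepest loss drop, backed off by roughly 10x.

```python
def lr_range_test(loss_fn, w0, lr_min=1e-5, lr_max=10.0, steps=100):
    """Toy LR range test: increase the learning rate exponentially each
    step, take one SGD step at that rate, and record the loss."""
    w = w0
    lrs, losses = [], []
    for i in range(steps):
        # exponential schedule from lr_min to lr_max
        lr = lr_min * (lr_max / lr_min) ** (i / (steps - 1))
        loss, grad = loss_fn(w)
        lrs.append(lr)
        losses.append(loss)
        w = w - lr * grad  # one SGD step at the current LR
    return lrs, losses

def suggest_lr(lrs, losses):
    """Common heuristic: find the step with the steepest loss decrease,
    then back off by ~10x for safety."""
    best_i = min(range(1, len(losses)), key=lambda i: losses[i] - losses[i - 1])
    return lrs[best_i] / 10

# Toy quadratic loss: loss = w^2, grad = 2w
quad = lambda w: (w * w, 2 * w)
lrs, losses = lr_range_test(quad, w0=5.0)
print(f"suggested LR ~= {suggest_lr(lrs, losses):.2g}")
```

Note that the suggestion only reflects how fast the *training* loss falls; it says nothing about generalization, which is why an LR read off the plot can still lead to overfitting.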

You can’t rely on lr_find to avoid overfitting; it only helps you pick a learning rate at which the loss falls quickly early in training. In later lessons you’ll learn more approaches for dealing with overfitting.