Sorry to dig up an old post, but this behaviour still happens in V2.
I was running a transfer-learning model on a small image classification dataset. I reloaded the model after 10k epochs and went through my pipeline, where I run the lr_find function first and use its suggestions to guide the one-cycle policy. What I found is that with stop_div=True, I occasionally hit a case where the learning rate diverged after one step, so the LRFinder callback called it quits, the recorder logged only one lr and loss value, and therefore no learning rate suggestion was returned.
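To make the failure mode concrete, here is a minimal sketch (not the actual fastai source; the div_factor=4 divergence rule and all names are my assumptions) of how a stop_div-style guard can end the sweep after recording a single point:

```python
def lr_sweep(losses, stop_div=True, div_factor=4):
    """Record (iteration, loss) pairs, stopping early if the loss is NaN
    or exceeds div_factor * best_loss (assumed divergence rule)."""
    recorded = []
    best = float("inf")
    for i, loss in enumerate(losses):
        # loss != loss is a NaN check without importing math
        if stop_div and (loss != loss or loss > div_factor * best):
            break  # diverged: stop before recording this point
        recorded.append((i, loss))
        best = min(best, loss)
    return recorded

# A divergent second step leaves only one logged point, so there is
# nothing to base an LR suggestion on:
print(len(lr_sweep([1.0, 100.0], stop_div=True)))   # 1
print(len(lr_sweep([1.0, 100.0], stop_div=False)))  # 2
```

With stop_div=False the sweep records every point, which matches what I see in practice.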
If I set stop_div=False, it works fine… Here is what the learning rate finder graph looks like when I run it with stop_div=False:
Note: this is a model that has already been trained for 10k epochs on a small dataset, so maybe I shouldn’t be running the learning rate finder at all.
Would it make sense for lr_find to raise an error (or at least a warning) when it exits after one iteration due to divergence and therefore can’t return suggested learning rates? Should I submit this as a bug / feature request?