lr_find for language learner: suggested spot is not always best

This is what I get from lr_find when I use only the train and test folders of the IMDB dataset.
If I add the unsup folder, the suggested spot is near 1e-2. Does this mean the search starts from the left? If so, I suggest it start from the right instead, since (I'm guessing) the goal is to locate the sharp increase in loss (or start at 10, for example). One can still inspect the plot (or use more epochs if a smaller rate is picked), but the idea is to save time and automate things further. There is also no guarantee that very small rates will converge.

The suggested spot is the point of steepest slope; it doesn't scan from left to right. It's just that in the example you gave, that point happens to be at 1e-5.
This example is exactly why this isn't a fully automated tool and still needs a human check.
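To make the criterion concrete, here is a minimal sketch of that steepest-slope (minimum-gradient) suggestion on a synthetic range-test curve. This is illustrative only: it assumes uniformly log-spaced learning rates, and fastai's real lr_find also smooths the recorded losses before suggesting, which is omitted here.

```python
import numpy as np

def suggest_lr_steepest(lrs, losses):
    """Suggest the learning rate at the point of steepest negative slope.

    Simplified sketch of the min-gradient criterion: take the gradient of
    the loss with respect to log(lr) (the range test sweeps lr
    exponentially) and return the lr where that gradient is most negative.
    """
    lrs = np.asarray(lrs, dtype=float)
    losses = np.asarray(losses, dtype=float)
    grads = np.gradient(losses, np.log(lrs))
    return lrs[np.argmin(grads)]

# Synthetic curve: flat plateau, then a descent, then divergence,
# mimicking a typical lr_find plot.
lrs = np.logspace(-6, 0, 100)
losses = np.where(
    lrs < 1e-4, 1.0,
    np.where(
        lrs < 1e-2,
        1.0 - 0.2 * (np.log10(lrs) + 4),   # descending segment
        0.6 + 5.0 * (np.log10(lrs) + 2),   # sharp increase (divergence)
    ),
)
print(suggest_lr_steepest(lrs, losses))
```

Note that the suggestion lands inside the descending segment, not at the sharp increase on the right; the divergence region has a large positive gradient, so it can never win under this criterion.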

Good to know, thanks. Could we use other criteria besides the minimum gradient, e.g. the length of the interval over which the loss is decreasing, and how close that interval gets to the minimum loss? This is visual thinking; I'm not sure how easy it would be to implement or whether it makes real sense.
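One way the "longest decreasing interval" idea could be sketched, purely as a hypothetical criterion (nothing fastai actually implements; the function name and the midpoint choice are my own assumptions):

```python
import numpy as np

def suggest_lr_longest_descent(lrs, losses, tol=0.0):
    """Hypothetical criterion: find the longest run of consecutive points
    where the loss is decreasing (by more than `tol`), and suggest the
    learning rate at the midpoint of that run.

    A sketch of the idea from the discussion, not a fastai API.
    """
    lrs = np.asarray(lrs, dtype=float)
    losses = np.asarray(losses, dtype=float)
    decreasing = np.diff(losses) < -tol  # True where the loss drops
    best_len = best_start = run_len = run_start = 0
    for i, d in enumerate(decreasing):
        if d:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, run_start
        else:
            run_len = 0
    return lrs[best_start + best_len // 2]

# Same style of synthetic range-test curve as before.
lrs = np.logspace(-6, 0, 100)
losses = np.where(
    lrs < 1e-4, 1.0,
    np.where(
        lrs < 1e-2,
        1.0 - 0.2 * (np.log10(lrs) + 4),
        0.6 + 5.0 * (np.log10(lrs) + 2),
    ),
)
print(suggest_lr_longest_descent(lrs, losses))
```

On real (noisy) loss curves this would need smoothing first, otherwise small fluctuations break long descents into many short runs; combining run length with "how close the run gets to the minimum loss" would then just mean scoring each run by both quantities.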