lr_find(): How to deal with zig-zagging loss

What to do when the loss sort of jumps up and down all over the place?!


How do I pick a rate? Or does this plot indicate that, at the current stage, running more epochs at any learning rate is no longer improving things?

I had a similar problem when running a tabular regression. I got a smoother loss curve by setting the y_range parameter inside get_tabular_learner. That made it easier for me to pick an optimal lr.
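For context on why y_range helps: fastai bounds regression outputs with a scaled sigmoid when y_range is set, so wild predictions can no longer produce huge loss spikes. A minimal NumPy sketch of that transform (the function name and the (lo, hi) values here are illustrative, not fastai's actual code):

```python
import numpy as np

def apply_y_range(raw_preds, lo, hi):
    """Squash unbounded model outputs into (lo, hi) via a scaled sigmoid,
    mirroring what fastai does when y_range is passed to a regression learner."""
    return lo + (hi - lo) / (1.0 + np.exp(-raw_preds))

# Extreme raw outputs stay inside the target range after the transform,
# which removes the out-of-range predictions that blow up the loss.
raw = np.array([-50.0, -1.0, 0.0, 2.0, 100.0])
bounded = apply_y_range(raw, lo=0.0, hi=5.0)
print(bounded.min() >= 0.0 and bounded.max() <= 5.0)  # True
```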

Is your batch size, by any chance, rather small? Try a larger one and see if that smooths it out a bit.
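The reason batch size matters here: the plotted loss is a mean over the batch, so its variance shrinks roughly as 1/batch_size. A quick standalone simulation of that effect (the loss distribution and batch sizes below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a stream of noisy per-example losses.
per_sample_losses = rng.exponential(scale=1.0, size=100_000)

def batch_loss_std(losses, batch_size):
    """Std of the batch-mean loss: chop the stream into batches and
    measure how much the per-batch average bounces around."""
    n_batches = len(losses) // batch_size
    batches = losses[: n_batches * batch_size].reshape(n_batches, batch_size)
    return batches.mean(axis=1).std()

small = batch_loss_std(per_sample_losses, batch_size=8)
large = batch_loss_std(per_sample_losses, batch_size=256)
# The small-batch curve jumps around far more than the large-batch one.
print(small > large)  # True
```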
