Turning off Data Augmentation

Hi,
I am training a standard image recognition model. Unfortunately, when I run learn.lr_find(), no valid loss is printed. Instead, I just see this:
#na#
Why is this happening? How can I rectify this?

Note: I have not passed any data augmentation. Can this be the cause?

I’ve seen that before - did you run learn.recorder.plot() to see the graph?
The LR finder is basically testing 100+ learning rates, so even if the first few show up as gibberish, you’ll usually still get a graph to work with.
Data augmentation won’t affect things here… I think it just had some odd results on some of the LRs it tested. I’ve seen it before though, and as I recall, I could still get a plot to work with; it just covered a smaller range.
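For reference, here’s a minimal sketch of the usual workflow in fastai v1 (the `path`, resnet34 and `error_rate` choices are just placeholders for whatever you’re actually using):

```python
from fastai.vision import *

# path is assumed to point at a folder of images, one class per subfolder
data = ImageDataBunch.from_folder(path, valid_pct=0.2, size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)

# sweeps ~100 learning rates on the training data only; valid_loss shows up as #na# because validation is skipped
learn.lr_find()

# plot loss vs. learning rate and pick a value just before the curve shoots up
learn.recorder.plot()
```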

Hi. Yes, it did plot the graph and it works fine. Just wanted to know whether this affects the performance of the model in any way

No, it won’t affect anything. The LR Finder saves the model, runs its testing, then reloads it, so you are exactly where you started before running the LR Finder.
So no issues!
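If you want to convince yourself, a quick sanity check (using the same `learn` as in the sketch above) is to compare validation results before and after the sweep:

```python
# validation loss and metrics before the LR sweep
before = learn.validate()

learn.lr_find()

# ...and after: the LR Finder reloaded the saved weights, so these should match
after = learn.validate()
print(before, after)
```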

lr_find is only used to try to find the best learning rate for training, so it doesn’t run over the validation set, just training. You can see that it skips validation in the LRFinder callback: https://github.com/fastai/fastai/blob/87b41088dc02f1c37fdbc1b35c2df4f576f179ed/fastai/callbacks/lr_finder.py#L23

Oh! Thanks a lot:)

Hi, I got the same thing as OP. Should I be worried?
It managed to plot the graph though?

I’m doing lesson 2 (downloading bears atm).

It manages to improve from 0.285714 -> 0.017857

Thanks

As I mentioned, lr_find does not run the validation set.