Question on error_rate behavior in Chapter 5 (pet breeds classification)

Hi,

Back in the earlier days (i.e. 2017), an increasing validation loss was deemed an indicator of overfitting (Lesson 2 discussion - beginner - #125 by jeremy, Lesson 2: further discussion ✅ - #37 by krash, Lesson 2: further discussion ✅ - #28 by marcmuc), and I used that criterion as well for a while.

There was some confusion regarding the definition of overfitting (Lesson 8 (2019) discussion & wiki - #464 by miwojc), but it seems it was a misunderstanding - maybe Jeremy’s use of “validation error” in the lecture was misheard as “validation loss.”

But I think it is clear now (and Jeremy has been teaching this for a while, Share your work here ✅ - #190 by jeremy) that the indicator of overfitting is a decreasing validation accuracy.

Chapter 1 of the book repeats it several times (fastbook/01_intro.ipynb at master · fastai/fastbook · GitHub):

However, you should only use those methods after you have confirmed that overfitting is actually occurring (i.e., you have actually observed the validation accuracy getting worse during training).

and right under that, he says …

If you train for too long, with not enough data, you will see the accuracy of your model start to get worse; this is called overfitting.

And - another vote for accuracy

and are more prone to overfitting (i.e. you can’t train them for as many epochs before the accuracy on the validation set starts getting worse).
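To make the distinction concrete, here is a minimal sketch in plain Python with made-up per-epoch numbers (not from the chapter). Note that fastai's `error_rate` is just `1 - accuracy`, so a rising error_rate and a falling validation accuracy are the same signal. In this hypothetical run, validation loss starts rising at epoch 3 while validation accuracy keeps improving until epoch 5, so by the accuracy criterion the model only begins overfitting at epoch 5:

```python
# Hypothetical per-epoch validation metrics (illustration only).
val_loss = [0.80, 0.55, 0.48, 0.52, 0.57, 0.61]
val_accuracy = [0.70, 0.78, 0.83, 0.86, 0.88, 0.85]

def first_worse_epoch(metric, higher_is_better=True):
    """Return the first epoch index where the validation metric gets
    worse than its best value so far, or None if it never does."""
    best = metric[0]
    for epoch, value in enumerate(metric[1:], start=1):
        worse = value < best if higher_is_better else value > best
        if worse:
            return epoch
        best = value
    return None

# Loss-based criterion flags "overfitting" at epoch 3...
print(first_worse_epoch(val_loss, higher_is_better=False))  # → 3
# ...but accuracy (the criterion the book uses) only worsens at epoch 5.
print(first_worse_epoch(val_accuracy))  # → 5
```

The gap between the two epochs is exactly the regime the chapter describes: the loss can rise (the model grows overconfident) while the predictions, and hence accuracy, are still getting better.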