Conflicting statements on train/val loss in the 2018 and 2019 courses

Hi all,

Once again a question regarding training and validation loss.

In lecture 2 of the 2019 course, @jeremy states at 51:35:
“Many people, including people who claim to understand machine learning, tell you that when the training loss is lower than the validation loss, you are overfitting. However this is ABSOLUTELY NOT TRUE”

However, in lecture 2 of the 2018 course, at 29:50, he says exactly the opposite:
“And overfitting would mean that the training loss is much lower than the validation loss”

It would be really great if somebody could shed some light on the loss on the training/validation sets: its relation to the metric, how it should change with epochs, what counts as overfitting, and so on. The toy example below shows what I mean.
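To make the question concrete, here is a tiny sketch of the per-epoch picture I have in mind; the loss values are made up, just to illustrate the shape. Training loss drops below validation loss quite early, but the validation loss itself only starts getting worse near the end. Is it only that last part that should be called overfitting?

```python
# Hypothetical per-epoch losses (made-up numbers for illustration only).
train_loss = [1.10, 0.80, 0.58, 0.42, 0.30, 0.22, 0.16, 0.12]
valid_loss = [0.95, 0.78, 0.62, 0.52, 0.46, 0.44, 0.46, 0.50]

for epoch, (tl, vl) in enumerate(zip(train_loss, valid_loss)):
    # Train loss dips below valid loss at epoch 2, yet valid loss keeps
    # improving until epoch 5; it only turns upward from epoch 6 onwards.
    rising = epoch > 0 and vl > valid_loss[epoch - 1]
    print(f"epoch {epoch}: train={tl:.2f}  valid={vl:.2f}  "
          f"gap={vl - tl:+.2f}  valid_rising={rising}")
```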

Training Loss > Validation Loss:

  • Room for improvement (train more, increase LR, etc.)
  • I think of Training Loss < Validation Loss as a threshold, a goal to reach.

Training Loss < Validation Loss:

  • Is the difference too large? If not, the model is where I want it to be in this respect.
  • “Too large” is, of course, subjective, but a practitioner’s intuition and trial/error help here (see the sketch after this list).
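To make the two cases above concrete, here is a rough sketch of the check I have in mind. The max_gap value is a made-up threshold; it is exactly the subjective part you would tune with intuition and trial/error, not a rule.

```python
def check_losses(train_loss, valid_loss, max_gap=0.1):
    """Rough heuristic only; max_gap is a made-up threshold, not a rule."""
    if train_loss > valid_loss:
        return "train > valid: room for improvement (train more, raise the LR, ...)"
    gap = valid_loss - train_loss
    if gap <= max_gap:
        return f"train < valid, gap {gap:.2f}: roughly where I want it"
    return f"train < valid, but gap {gap:.2f} feels too large: revisit data/regularisation"

print(check_losses(0.45, 0.40))  # still room to train
print(check_losses(0.30, 0.35))  # acceptable gap
print(check_losses(0.10, 0.40))  # suspiciously large gap
```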

As a side note: Mary Shaw gave a talk, Progress Toward an Engineering Discipline of Software.
The whole talk is nice and informative; I find the two minutes starting at 4:40 (craft, engineering, science) especially useful.

hope this helps,
selçuk


Have a look at the messages above and below mine in that thread, which basically asks the same question…
