Once again a question regarding training and validation loss.
In lecture 2 (2019 course) @jeremy states at 51:35:
“Many people, including people who claim to understand machine learning, tell you that when the training loss is lower than the validation loss, you are overfitting. However this is ABSOLUTELY NOT TRUE”
However, in lecture 2 of the 2018 course at 29:50 he says exactly the opposite:
“And overfitting would mean that the training loss is much lower than the validation loss”
It would be really great if somebody could shed some light on training and validation loss: how they relate to the metric, how they should change over the epochs, what they tell you about overfitting, etc.
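To make the question concrete, here is a minimal sketch of the kind of loop I have in mind (plain PyTorch with synthetic data, not fastai; all names and numbers are just for illustration):

```python
# Minimal sketch: track train vs. valid loss per epoch on a toy
# regression problem, to ask when "train < valid" means overfitting.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: y = 3x + noise, split 80/20 into train/valid
X = torch.randn(1000, 1)
y = 3 * X + 0.5 * torch.randn(1000, 1)
X_train, y_train = X[:800], y[:800]
X_valid, y_valid = X[800:], y[800:]

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(20):
    # One full-batch training step per epoch (toy-sized data)
    model.train()
    opt.zero_grad()
    train_loss = loss_fn(model(X_train), y_train)
    train_loss.backward()
    opt.step()

    # Evaluate on the held-out validation set without gradients
    model.eval()
    with torch.no_grad():
        valid_loss = loss_fn(model(X_valid), y_valid)

    # This is the gap the two quotes seem to disagree about:
    # is train_loss < valid_loss already overfitting, or only
    # when valid_loss starts getting worse?
    print(f"epoch {epoch:2d}  train {train_loss:.4f}  valid {valid_loss:.4f}")
```

In a run like this the training loss typically ends up below the validation loss, so I'd like to understand at what point that gap actually signals overfitting.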