To be more precise, he said to stop when validation loss is getting worse. Sometimes your validation loss stays flat while your metrics still improve, for instance.
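That point can be sketched as a tiny early-stopping check keyed to a validation metric (say, accuracy) rather than to the loss. This is a hypothetical helper for illustration, not anything from the post:

```python
def should_stop(metric_history, patience=3):
    """Stop when the validation metric (higher = better) hasn't
    improved over the last `patience` epochs."""
    if len(metric_history) <= patience:
        return False  # not enough history yet
    best_before = max(metric_history[:-patience])
    recent_best = max(metric_history[-patience:])
    return recent_best <= best_before

# Loss could be flat here while accuracy keeps climbing,
# so a metric-based check keeps training going:
accs = [0.70, 0.72, 0.74, 0.75, 0.76]
print(should_stop(accs))  # still improving -> False
```

The same history with a stalled metric (e.g. `[0.70, 0.76, 0.75, 0.74, 0.74]`) would return `True` and trigger the stop.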
TBH, the “notebook library” seemed like an old idea when I did it in 2017: I just needed to copy-paste a few lines from the Jupyter docs, probably because other people had had the idea before me and had even written the documentation needed to do just that.
And I don’t know who invented overfitting, but I think A. Karpathy covered it in his Stanford CS whatever course.
It’s not a new idea, but it’s still highly controversial. I’ve been assured, many times, that it is impossible to build up modules using Jupyter!
It’s many, many decades older than that. Unfortunately, most advice about it still recommends just checking whether training loss is lower than validation loss.
I keep hearing ‘deep learning’ and ‘engineering’ together a lot recently.
first Leslie then Jeremy… deeplearning.engineering hmm…
Are we moving to a new field within the AI space? If so, what would we need to change in current workflows, and how?