Why isn't cross-validation used in training?

I have worked through lessons 1-3. It seems that Jeremy doesn't use cross-validation when training models, but I learned to use cross-validation when I studied ML a long time ago.

Is it introduced later, or is there some specific reason that it is less useful with deep learning?

Generally there’s enough data that validation uncertainty isn’t a major issue.


Generally… but some datasets aren't so big. Is it hard to implement cross-validation on VGG16?

No, not at all! :slight_smile:
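For anyone wondering what that looks like in practice: cross-validating a deep model is mostly a matter of wrapping the training loop in a k-fold split and re-initialising the model on every fold. Here is a minimal sketch using scikit-learn's `KFold`; `build_model` is a hypothetical stand-in for whatever builds your fresh VGG16-based model (a tiny classifier is used so the sketch runs anywhere):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def build_model():
    # In practice, return a freshly initialised VGG16-based model here
    # (e.g. a pretrained VGG16 with a new classification head).
    # A simple classifier stands in so this sketch runs anywhere.
    return LogisticRegression(max_iter=1000)

def cross_validate(X, y, n_splits=5, seed=0):
    """Train a fresh model on each fold; return per-fold validation accuracy."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, val_idx in kf.split(X):
        model = build_model()  # re-initialise weights every fold
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[val_idx], y[val_idx]))
    return scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(int)
scores = cross_validate(X, y)
print(len(scores))  # one score per fold
```

The only deep-learning-specific caveat is cost: you pay for k full training runs, which is the usual reason it's skipped when the validation set is already large.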

@jeremy I came across an Andrew Ng talk last fall in which he proposes an applied bias-variance flowchart for building better deep learning systems. Specifically, he splits the data into four parts — train, train-val, val, and test — and decides on the next step by locating the largest error gap between them. I am curious about your thoughts on this.
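To make the idea concrete, here is a rough sketch of the four-way split and the gap-based diagnosis. The threshold `tol`, the `human_err` baseline, and the suggested remedies are my own paraphrase of the flowchart, not an exact reproduction of Ng's numbers:

```python
import numpy as np

def four_way_split(n, fracs=(0.6, 0.2, 0.1, 0.1), seed=0):
    """Shuffle indices and split them into train / train-val / val / test."""
    assert abs(sum(fracs) - 1.0) < 1e-9
    idx = np.random.default_rng(seed).permutation(n)
    bounds = np.cumsum([int(f * n) for f in fracs[:-1]])
    return np.split(idx, bounds)

def diagnose(train_err, train_val_err, val_err, human_err=0.0, tol=0.02):
    """Attribute the largest error gap, loosely following Ng's flowchart.

    Thresholds and remedy wording are illustrative assumptions.
    """
    if train_err - human_err > tol:
        return "high bias: try a bigger model or train longer"
    if train_val_err - train_err > tol:
        return "high variance: get more data or add regularisation"
    if val_err - train_val_err > tol:
        return "train/val mismatch: make training data more like val data"
    return "doing well: consider more val/test data or a better metric"

train, train_val, val, test = four_way_split(1000)
print(len(train), len(train_val), len(val), len(test))  # 600 200 100 100
print(diagnose(train_err=0.01, train_val_err=0.10, val_err=0.11))
```

The train-val set is drawn from the same distribution as the training data, which is what lets you tell overfitting (train vs. train-val gap) apart from distribution shift (train-val vs. val gap).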