I have worked through lessons 1-3. It seems that Jeremy doesn’t use cross-validation when training models. I learned to use cross-validation when I studied ML a long time ago.
Is it introduced later, or is there some specific reason that it is less useful with deep learning?
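For context, this is the technique I mean: split the data into k folds, train on k-1 of them, validate on the held-out fold, and average the scores. A minimal sketch with scikit-learn (the dataset and model here are just placeholders, not anything from the course):

```python
# k-fold cross-validation sketch: score a model on 5 held-out folds
# and average the results. Iris/logistic regression are placeholders.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())  # average validation accuracy across the 5 folds
```

With deep learning the cost concern is obvious: k folds means training the network k times, which is one common reason people fall back to a single validation split.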
@jeremy I came across an Andrew Ng talk last fall in which he proposes an applied bias-variance flowchart for building better deep learning systems. Specifically, he splits the data into four parts (train, train-val, val, and test) and decides what to work on next by locating the largest error gap between adjacent parts. I am curious about your thoughts on this.
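The idea above can be sketched as a tiny diagnostic helper. This is my own illustrative version, not code from the talk, and the error values in the example are made up:

```python
# Hedged sketch of the four-way-split diagnosis: compare errors at each
# stage and treat the largest adjacent gap as the next thing to fix.
def diagnose(human_err, train_err, train_val_err, val_err, test_err):
    """Return the problem suggested by the largest adjacent error gap."""
    gaps = {
        "avoidable bias (underfitting)": train_err - human_err,
        "variance (overfitting)": train_val_err - train_err,
        "train/val data mismatch": val_err - train_val_err,
        "overfitting to the val set": test_err - val_err,
    }
    return max(gaps, key=gaps.get)

# Example (made-up numbers): train error far above human level,
# so the biggest gap is the bias term.
print(diagnose(human_err=0.01, train_err=0.08,
               train_val_err=0.09, val_err=0.10, test_err=0.10))
# → avoidable bias (underfitting)
```

The train-val set exists precisely so the variance gap and the data-mismatch gap can be measured separately, which a plain train/val split cannot do.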