How relevant are small differences in validation accuracy?

In Lesson 3 Jeremy explains different methods to avoid overfitting, like dropout and data augmentation. I wonder how relevant small differences in validation accuracy are. How much better is a result of 0.9875 compared to 0.9840? Is this something you will notice in the end?

It depends on how many validation samples you use. If you validate against 10 samples, that difference won’t mean much. With 1e6 samples? That difference is probably real.

As your models get better, you probably want a larger validation set for each epoch to reduce the noise in your validation score.
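One way to put rough numbers on this (a sketch, treating each validation prediction as an independent Bernoulli trial, which ignores correlations in real data) is the binomial standard error of an accuracy estimate, `sqrt(acc * (1 - acc) / n)`:

```python
import math

def accuracy_std_error(acc: float, n: int) -> float:
    """Standard error of an accuracy estimate over n validation samples,
    modeling each prediction as an independent Bernoulli trial."""
    return math.sqrt(acc * (1 - acc) / n)

# 95% confidence half-width (~1.96 standard errors) at different
# validation set sizes, for an observed accuracy of 0.9840:
for n in [10, 1_000, 100_000, 1_000_000]:
    half_width = 1.96 * accuracy_std_error(0.9840, n)
    print(f"n={n:>9,}: 0.9840 ± {half_width:.4f}")
```

At n=10 the interval is roughly ±0.08, far wider than the 0.0035 gap between 0.9840 and 0.9875, so the difference is pure noise; at n=1e6 it shrinks to about ±0.0002, and the gap is very likely real.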