(This is my first post to this forum so hopefully I did this correctly)
I noticed in video lecture 3 that when Jeremy used data augmentation and reran the training, it still reported the same number of samples as in the earlier run. My understanding of data augmentation is that it "augments" the data set so you can effectively increase the volume of your data without actually having to collect more from an external source. As such, I expected that data augmentation would result in a larger training set. For example, if the augmentation were tilting the image by a certain angle, I would expect the training to be done on both the tilted and untilted images. Why is it that we don't retain the original image in the training set as well?
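To clarify what I'm asking, here's a toy sketch of what I think might be happening (the class and names are just made up for illustration): the transform seems to be applied on the fly each time a sample is drawn, so the reported sample count never changes even though every epoch sees differently tilted images.

```python
import random

class AugmentedDataset:
    """Toy dataset that applies a random 'tilt' on the fly.

    Hypothetical sketch: this mimics how a framework might apply
    augmentation inside __getitem__ instead of enlarging the data set.
    """
    def __init__(self, images, max_angle=10.0):
        self.images = images      # the original samples, untouched
        self.max_angle = max_angle

    def __len__(self):
        # The reported sample count is unchanged by augmentation
        return len(self.images)

    def __getitem__(self, idx):
        # A fresh random angle is drawn every time the sample is fetched
        angle = random.uniform(-self.max_angle, self.max_angle)
        # Stand-in for an actual rotation: return the image plus the angle
        return (self.images[idx], angle)

ds = AugmentedDataset(["img0", "img1", "img2"])
print(len(ds))  # → 3, same count as without augmentation
```

If that's roughly right, then the untilted original isn't "dropped" so much as it's just one possible draw (angle near zero) among infinitely many, which would explain the unchanged count. Is that the correct intuition?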