With augmentation, the number of training images doesn’t change. Instead, a random set of image augmentation transforms is applied to each image during training. So if you train for, say, two epochs, the network will see each image twice, but each time the image will be transformed randomly: rotated, flipped, brightness adjusted, etc. See here for the full list of default transforms: Data augmentation in computer vision | fastai
Data augmentation refers to creating random variations of our input data, such that they appear different, but do not actually change the meaning of the data. Examples of common data augmentation techniques for images are rotation, flipping, perspective warping, brightness changes and contrast changes. For natural photo images such as the ones we are using here, a standard set of augmentations that we have found work pretty well are provided with the aug_transforms function.
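To make the mechanics concrete, here is a minimal pure-Python sketch of the idea (no fastai required): the dataset stays the same size, but a fresh random transform is drawn every time an image is fetched, so each epoch the model sees a different view of the same image. The `augment` and `train` helpers and the `"rotate"`/`"flip"`/`"brighten"` names are hypothetical stand-ins for what `aug_transforms` does, not the actual fastai implementation.

```python
import random

def augment(image, rng):
    # Stand-in for fastai's aug_transforms: each fetch draws a
    # fresh random transform, so the same image yields a new view.
    transform = rng.choice(["rotate", "flip", "brighten", "identity"])
    return (image, transform)

def train(dataset, epochs, seed=0):
    # Loop over the SAME images every epoch; only the random
    # transform applied to each one changes between epochs.
    rng = random.Random(seed)
    views = []
    for _ in range(epochs):
        for img in dataset:
            views.append(augment(img, rng))
    return views

views = train(["img0", "img1", "img2"], epochs=2)
print(len(views))  # 6: three images, each seen once per epoch
```

Note that the dataset itself never grows: two epochs over three images produce six training views, but still only three underlying images.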
So if you train for more than 1 epoch, on each full pass through the data the model is going to see a “different image”. So, for example, if your accuracy is 99% within the first 2 epochs, is it a waste to augment the data? I’m having trouble getting a vision model that needs more than 3 epochs to plateau; it doesn’t get any better after that.