Why does data augmentation need to know anything about the particular model you are using? I would have thought it would only need to know where the data is and which transformations to apply (e.g., resize, flip, rotate, zoom).
Preprocessing for VGG models is done by subtracting the channel mean, whereas preprocessing for Inception models (ResNet uses the same preprocessing) uses the formula ((x/255.) - 0.5) * 2. We basically follow the same preprocessing steps that were used by the authors of the original paper.
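To make the difference concrete, here's a minimal sketch of the two conventions (the function names are mine, and the VGG channel means shown are the commonly quoted ImageNet RGB values, so treat them as an assumption and check what your library actually uses):

```python
import numpy as np

def preprocess_vgg(x, channel_mean=(123.68, 116.779, 103.939)):
    # VGG-style: subtract the per-channel mean (assumed ImageNet RGB means).
    return x.astype(np.float32) - np.array(channel_mean, dtype=np.float32)

def preprocess_inception(x):
    # Inception/ResNet-style: scale [0, 255] pixels to [-1, 1].
    return ((x.astype(np.float32) / 255.) - 0.5) * 2
```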
I am not sure there is any one place where this is all collected, but you can read the original paper for each model to get the information.
Somehow I still don't feel the question is being answered exactly. Yes, different models have different preprocessing, but the data augmentation settings here, including max_zoom, are not preprocessing. Data augmentation is a way to train on a wider variety of data when the training data is limited. So my personal take is that the function stores the data augmentation parameters here for later use in training.
I may not be correct, but here’s what I think.
tfms stands for transformations. It's not only about data augmentation; it's about getting the data ready to pass to our model (normalization and resizing, for example). Since we 'transform' the data anyway, we might as well do data augmentation in the same place. That's why tfms_from_model takes a parameter responsible for data augmentation, and that parameter (aug_tfms) is not in any way dependent on the model we're using; see the sketch below.
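Roughly like this, based on the old fastai (v0.7) API as I remember it from the course notebooks (PATH and sz are placeholders; double-check the exact imports for your version):

```python
from torchvision.models import resnet34
from fastai.transforms import tfms_from_model, transforms_side_on
from fastai.dataset import ImageClassifierData

PATH = 'data/dogscats/'  # hypothetical data directory

# The model argument only determines the normalization stats to apply;
# aug_tfms and max_zoom configure augmentation and are model-independent.
tfms = tfms_from_model(resnet34, sz=224,
                       aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
```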
If you are asking whether it will add more images to the file system, the answer is no. An easy way to check is to count the images in your /train directories, run your model with augmentation, and count again, as in the snippet below.
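For instance, a quick way to count them in Python (the path and the set of extensions are just assumptions for illustration):

```python
from pathlib import Path

def count_images(folder, exts=('.jpg', '.jpeg', '.png')):
    # Recursively count image files under a directory tree.
    return sum(1 for p in Path(folder).rglob('*') if p.suffix.lower() in exts)

print(count_images('data/train'))  # same number before and after training
```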
It just creates a bunch of augmented images at runtime as you train your model.
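Something like this sketch (plain PIL here just to illustrate the idea; the library's actual transforms are more involved):

```python
import random
from PIL import Image

def augment(img):
    # Each call produces a fresh random variant; nothing is written to disk.
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)  # random horizontal flip
    return img.rotate(random.uniform(-10, 10))      # small random rotation

# In the training loop, every epoch sees a differently augmented copy:
# x = augment(Image.open(path))
```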
Yes, that's what I wanted to know. So it acts as if we have more images in the dataset, but in reality these images are created on the fly as they are fed to the model, without being stored permanently, right?