Can you use methods like RandomResizedCrop to increase the size of your training set?
thank you so much
You will have one new version of it per epoch, each time you iterate through your data.
Why do we have both? batch_tfms are also applied to images, aren’t they?
How is RandomResizedCrop applied to the validation data?
Based on my understanding, in each epoch the image will be cropped to a small area and zoomed in. I may be wrong, though.
It’s a center crop.
The min_scale takes 30% of the image at a time… is there an industry-standard percentage that is recommended to use here?
How can we add different augmentations for the training and validation sets?
My understanding is:
batch_tfms applies the SAME transform to all images in your batch.
item_tfms applies a different transform to each image in your batch.
See: https://forums.fast.ai/t/lesson-3-official-topic/67244/64?u=joshvarty
This is effectively what it’s doing. The model sees a lot more variations of the input images than if you didn’t have this randomized transform.
No, it takes a random percentage between 30% and 100% of the area (usually we use 0.35).
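To make the "random percentage between 30% and 100% of the area" concrete, here is a minimal pure-Python sketch of how such a crop box could be chosen. This is an illustration of the idea, not fastai's actual implementation; the function name and the square-crop simplification are my own assumptions.

```python
import random

def random_resized_crop_box(width, height, min_scale=0.3, max_scale=1.0):
    """Sketch (not fastai's code): pick a random crop box covering between
    min_scale and max_scale of the image area, using a square crop for
    simplicity. The cropped region would then be resized to the target size."""
    scale = random.uniform(min_scale, max_scale)       # fraction of the area to keep
    side = int((scale * width * height) ** 0.5)        # side of a square with that area
    side = min(side, width, height)                    # keep the box inside the image
    left = random.randint(0, width - side)             # random position per call,
    top = random.randint(0, height - side)             # so each epoch sees a new crop
    return left, top, left + side, top + side

random.seed(0)
print(random_resized_crop_box(640, 480))
```

Each call draws a new scale and position, which is why the model sees a different part of every image on every epoch.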
What is the difference between mult=2, 1, 3, etc.?
The book states …
"On each epoch (which is one complete pass through all of our images in the dataset) we randomly select a different part of each image. "
But I thought the transformations were applied per batch … not per epoch.
Would any of these transforms be better than padding (adding empty, useless pixels)?
That is incorrect: batch_tfms can apply a different transformation to each image. Rotation, for instance, or flipping, is random per image.
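The item-level vs batch-level distinction can be sketched in plain Python. The point is that item transforms run on each image individually (so differently sized images all come out the same size and can be stacked into a batch), while batch transforms run once over the whole stacked batch but can still randomize per image. The function names below are illustrative, not fastai's API, and lists of ints stand in for images.

```python
import random

def item_tfm(img, size=4):
    """Item-level transform (like RandomResizedCrop in item_tfms): runs on
    each image on its own, producing a fixed size so images can be batched."""
    start = random.randint(0, len(img) - size)   # random crop position per image
    return img[start:start + size]

def batch_tfm(batch):
    """Batch-level transform (like a flip in batch_tfms): one call on the
    whole batch, yet the randomness is still drawn per image."""
    return [img[::-1] if random.random() < 0.5 else img for img in batch]

random.seed(42)
images = [list(range(8)), list(range(10, 20)), list(range(30, 37))]  # varying sizes
batch = [item_tfm(img) for img in images]   # all the same length now: stackable
batch = batch_tfm(batch)
print(batch)
```

In fastai the practical difference is where they run: item_tfms on the CPU per item, batch_tfms on the whole batch (typically on the GPU), but both can apply per-image randomness.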
On the validation set, we center crop the image if its ratio isn’t in the range (to the minimum or maximum value) and then resize. Link is:
Documentation of RandomResizedCrop
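The validation-time behavior described above (deterministic center crop, then resize) can be sketched as follows. This is a simplification of what the docs describe: here we always take the largest centered square, ignoring the aspect-ratio-range check, and the function name is my own.

```python
def center_crop_box(width, height):
    """Sketch of validation-time cropping: no randomness, just the largest
    centered square, which would then be resized to the target size."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return left, top, left + side, top + side

print(center_crop_box(640, 480))  # → (80, 0, 560, 480)
```

Because there is no randomness, validation metrics are computed on the same view of each image every epoch.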