Aren't additional transforms supposed to increase your training set size?

I trained a model and it took about 2 minutes per epoch.

Then I added `flip_vert = True` to `get_transforms()` and it's still 2 minutes per epoch. I was hoping to double the size of my training set by duplicating every image and flipping it upside down, but if that had happened, each epoch should take about 4 minutes.

How do I make my originally intended behavior happen?

Not sure exactly how it works in fastai, but transforms are usually applied on the fly: you still have the same number of training images per epoch, but each image is randomly transformed each time it is loaded.
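A minimal sketch of that idea, using plain Python lists in place of real images (`flip_vert` and `epoch_batches` are hypothetical names, not fastai APIs): each epoch still iterates over exactly `len(dataset)` items, so epoch time stays the same, while the random flip changes what the model actually sees.

```python
import random

# Toy "dataset": each image is a list of rows, top to bottom.
dataset = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
]

def flip_vert(img):
    """Flip an image upside down by reversing its rows."""
    return img[::-1]

def epoch_batches(dataset, p_flip=0.5):
    """Yield one randomly-augmented copy of each image per epoch."""
    for img in dataset:
        yield flip_vert(img) if random.random() < p_flip else img

# On-the-fly augmentation: the dataset size is unchanged,
# so the per-epoch cost is unchanged too.
epoch = list(epoch_batches(dataset))
assert len(epoch) == len(dataset)
```

Run the loop twice and you will generally get different flipped/unflipped combinations, which is the point: over many epochs the model effectively sees both orientations without the dataset ever growing.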

OK, so does fastai support creating additional images via transforms?
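As far as I know, fastai's `get_transforms()` pipeline only does on-the-fly augmentation and never duplicates files. If you literally want a training set twice the size, one option is to pre-generate the flipped copies yourself (e.g. with PIL on disk) before building your data loader. A pure-Python sketch of that offline expansion, using toy list-of-rows "images" (`expand_with_flips` is a hypothetical helper, not a fastai function):

```python
def flip_vert(img):
    """Flip an image upside down by reversing its rows."""
    return img[::-1]

def expand_with_flips(dataset):
    """Return the originals plus a vertically flipped copy of each image."""
    return dataset + [flip_vert(img) for img in dataset]

images = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
]
expanded = expand_with_flips(images)

# The expanded set is twice as large, so each epoch over it
# really would take roughly twice as long.
assert len(expanded) == 2 * len(images)
```

Note that on-the-fly augmentation usually gives you the same statistical benefit with less disk usage, since over many epochs the model sees both orientations anyway; doubling the files mainly changes how many images one epoch iterates over.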
