Effect of image size on training time

I am currently investigating the concept of progressive resizing. I trained on the Oxford Pets dataset with different image sizes and measured the training duration per epoch. This produced an interesting curve: at a certain point, the training duration per epoch actually dropped as the image size increased. I repeated the experiment several times and got the same result.

It is also interesting that if I first preprocess the whole dataset to a uniform image size of 256x256 pixels, this drop already appears at a training size of 128 pixels.

Can anyone tell me why? Could it be related to the time it takes to resize the images?
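One rough way to isolate the resizing cost (a sketch of my own, not part of the original experiment) is to time a simple resize of randomly generated "images" at each target size, outside of any training loop. This uses a plain nearest-neighbour resize; real dataloaders use interpolated resizing, so treat it only as a first approximation:

```python
# Benchmark sketch: time nearest-neighbour resizes of random uint8
# "images" at several target sizes. Source size 500x375 is an
# arbitrary stand-in for a typical Oxford Pets photo.
import time
import numpy as np

def nn_resize(img, size):
    """Nearest-neighbour resize of an HxWxC array to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def time_resizes(size, n_images=20):
    """Seconds spent resizing n_images random images to size x size."""
    imgs = [np.random.randint(0, 256, (500, 375, 3), dtype=np.uint8)
            for _ in range(n_images)]
    start = time.perf_counter()
    for im in imgs:
        nn_resize(im, size)
    return time.perf_counter() - start

for size in (64, 128, 224, 256):
    print(f"{size:4d}px: {time_resizes(size):.4f}s")
```

If the per-size timings here show a similar dip, that would point at the resize step rather than the GPU work.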

Without dissecting the entire pipeline it would be hard to say for sure. In general I have heard that image sizes that are multiples of 2 train faster on GPUs. Beyond that, it may be that downsizing your images by exactly 1/2 is cheaper for some reason, i.e. downscaling by a factor of exactly 2 might simply need less math.
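To illustrate what I mean by "less math" (this is just my own toy example, not what any library necessarily does): halving an image can be done with plain strided slicing or non-overlapping 2x2 block averaging, with no interpolation weights at all, whereas an arbitrary scale factor needs per-pixel interpolation:

```python
# Factor-of-2 downscaling without interpolation, on a tiny 4x4 "image".
import numpy as np

img = np.arange(16, dtype=np.float32).reshape(4, 4)

# Average non-overlapping 2x2 blocks: reshape to (2, 2, 2, 2) so that
# axes 1 and 3 index within each block, then take the mean over them.
half = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Exact halving can even be done by simple subsampling (every 2nd pixel):
sub = img[::2, ::2]

print(half)
print(sub)
```

A non-integer factor (say 4 to 3 pixels) has no such shortcut, so a factor-of-2 fast path is at least plausible.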

Those are just my ideas so far, you would have to investigate to determine what was causing this though.

It’s an older phenomenon :slight_smile: and you see the same effect on GPU or CPU at certain sizes, because of the processors’ cache sizes.

If you want more details, here: https://www.aristeia.com/TalkNotes/ACCU2011_CPUCaches.pdf


Hi AmorfEvo, hope all is well!
Great post!
Makes the complexities of caching a little easier to understand.

Cheers mrfabulous1 :smiley: :smiley:

Thank you very much for your answers.
@AmorfEvo I did not understand exactly how my training speed relates to the CPU or GPU cache. In particular, I am not sure why the training time drops again at a certain image size.
@marii I assumed as well that it is related to the resizing of the images, and if the original image is exactly twice as large as the target, it would make sense that reducing it by a factor of exactly 2 could be faster. However, I am surprised, because the Oxford Pets dataset does not have a uniform image size, and the effect is still visible.
I ran another, similar experiment, but this time I resized the images to each target size before training, so no resizing happens during training.
The drop is not visible this time. Can I now assume that the effect comes from the time spent resizing the images during training?
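For reference, the offline pre-resizing step I used looks roughly like this (a simplified sketch; the function and dataset here are placeholders, and a real pipeline would use proper interpolation instead of nearest neighbour):

```python
# Resize every image exactly once, up front, so the training loop only
# ever loads already-sized arrays and never resizes on the fly.
import numpy as np

def nn_resize(img, size):
    """Nearest-neighbour resize of an HxWxC array to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def preprocess_dataset(images, size):
    """Resize a whole (non-uniform) dataset once, before training."""
    return [nn_resize(im, size) for im in images]

# Images of different original sizes, as in Oxford Pets:
dataset = [np.zeros((300, 200, 3), dtype=np.uint8),
           np.zeros((450, 500, 3), dtype=np.uint8)]
resized = preprocess_dataset(dataset, 128)
print([im.shape for im in resized])
```

With this setup the per-epoch time measures only loading and GPU work, which is why comparing it against the on-the-fly-resize runs isolates the resizing cost.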