Question about FastAI's pre-sizing vs. what other libraries do

In one of the earlier video lessons, Jeremy mentions that (as far as we know) FastAI is the only deep learning library which implements pre-sizing. I’m curious what this implies about how other libraries handle data augmentation.

Specifically, since other libraries don't first resize all the images to a common size, does this imply that they do all of their data augmentation as item transforms, one image at a time (as opposed to batch transforms applied to the whole batch in a single bulk operation)? If so, is that work done on the CPU or the GPU?
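To make my question concrete, here's a plain-Python sketch of my understanding (no deep learning library; the function names are just illustrative, not fastai or TensorFlow API). The point is that a batch transform is only possible once every image has identical dimensions, which is what the item-level resize step buys you:

```python
def resize_item(image, size):
    """Item transform: nearest-neighbour resize of ONE image (a list of rows)."""
    h, w = len(image), len(image[0])
    return [
        [image[int(r * h / size)][int(c * w / size)] for c in range(size)]
        for r in range(size)
    ]

def flip_batch(batch):
    """Batch transform: one bulk operation over the whole stacked batch.
    Only possible because every image now has identical dimensions."""
    return [[row[::-1] for row in image] for image in batch]

# Images of different sizes, as they come off disk.
images = [
    [[1, 2], [3, 4]],                       # 2x2
    [[5, 6, 7], [8, 9, 10], [11, 12, 13]],  # 3x3
]

# Step 1 (item transform, per image): resize everything to a common 2x2.
batch = [resize_item(img, 2) for img in images]

# Step 2 (batch transform, one bulk op): augment the uniform batch in one go.
augmented = flip_batch(batch)
```

In a real library step 1 would run on the CPU while loading, and step 2 would be a single tensor operation that can run on the GPU — which is the distinction I'm asking about.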

As far as I know, "presizing" is not done in TensorFlow, at least judging from its official documentation.

The tf.keras API has layers that do "resize and rescale", i.e. they resize the images and then rescale the pixel values. I'm not hands-on with TensorFlow, though, so I'm not sure whether this has the same effect as the presizing done in fastai.

My guess is that it doesn't have the same effect, but that's just a guess based on reading the documentation.
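For reference, this is roughly what those layers look like (a minimal sketch based on the TF docs — the target size and input shape here are made up for illustration):

```python
import tensorflow as tf

# "Resize and rescale" as a small preprocessing pipeline:
# Resizing forces a common spatial size, Rescaling maps pixels to [0, 1].
resize_and_rescale = tf.keras.Sequential([
    tf.keras.layers.Resizing(160, 160),
    tf.keras.layers.Rescaling(1.0 / 255),
])

# A fake batch of 4 images, 200x300 pixels, values in [0, 255).
batch = tf.random.uniform((4, 200, 300, 3), maxval=255)
out = resize_and_rescale(batch)  # shape (4, 160, 160, 3), values in [0, 1]
```

Note this is a single resize straight to the training size, which is exactly why I suspect it differs from fastai's presizing (resize large first, then crop down as part of augmentation).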

About data augmentation done in batch on the GPU: the TensorFlow documentation is not clear about whether augmentation is applied one image at a time or in batches. However, using the tf.keras preprocessing layers, augmentation can run either on the CPU (inside the tf.data input pipeline) or on the GPU (as layers placed inside the model itself).
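A sketch of the two placements, as I understand the TF guides (shapes and layer choices are illustrative):

```python
import tensorflow as tf

# The same augmentation layers can live in two places.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

images = tf.random.uniform((8, 32, 32, 3))

# Option A: augmentation inside the tf.data pipeline. This runs on the CPU,
# asynchronously, overlapped with training via prefetch. Note map() here
# receives whole batches, so the layers do operate batch-wise.
ds = (tf.data.Dataset.from_tensor_slices(images)
      .batch(4)
      .map(lambda x: augment(x, training=True),
           num_parallel_calls=tf.data.AUTOTUNE)
      .prefetch(tf.data.AUTOTUNE))

# Option B: the same layers placed inside the model. They then run on the
# GPU (when one is available) as part of the forward pass, and are inactive
# at inference time.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    augment,
    tf.keras.layers.GlobalAveragePooling2D(),
])
out = model(images, training=True)
```

So "CPU with optional GPU acceleration" seems to come down to where you put the layers, rather than a flag you flip.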