Why does "tfms_from_model" need the pre-trained model you are using?

In reference to this line here:

tfms = tfms_from_model(resnet34, sz, aug_tfms=transforms_side_on, max_zoom=1.1)

Why does data augmentation need to know anything about the particular model you are using? I would think that it would simply need to know where the data is and the transformations you want to apply to it (e.g., resize, flip, rotate, zoom, etc.)


Different models “normalize” data in different ways. That is why it needs to know the model.


I got you … makes sense since I noticed we’re not passing in the channels’ mean and std information ourselves.

Thanks for the clarification.

I always thought normalization was a standard operation on images irrespective of the model. Can you please elaborate on this, or point me to some reading material which explains why this is so?


Preprocessing for VGG models is done by subtracting the per-channel mean, whereas preprocessing for Inception models (ResNet uses the same preprocessing) uses the formula ((x/255.)-0.5)*2. We basically follow the same preprocessing step that was used by the authors of the original paper.
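To make the difference concrete, here is a minimal numpy sketch of the two normalization schemes described above (this is illustrative code, not the library's actual implementation; the VGG channel means shown are the commonly quoted ImageNet RGB means):

```python
import numpy as np

def normalize_vgg(img, channel_mean=(123.68, 116.779, 103.939)):
    """VGG-style preprocessing: subtract the per-channel mean
    (commonly quoted ImageNet means, RGB order, 0-255 scale)."""
    return img.astype(np.float32) - np.array(channel_mean, dtype=np.float32)

def normalize_inception(img):
    """Inception-style preprocessing: scale pixels to the range [-1, 1]."""
    return ((img.astype(np.float32) / 255.0) - 0.5) * 2

white = np.full((2, 2, 3), 255, dtype=np.uint8)  # all-white test image
black = np.zeros((2, 2, 3), dtype=np.uint8)      # all-black test image

print(normalize_inception(white).max())  # white pixels map to 1.0
print(normalize_inception(black).min())  # black pixels map to -1.0
```

The point is that the two schemes produce values on different scales, so a network pre-trained with one scheme expects its inputs normalized the same way at inference and fine-tuning time.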

I am not sure of any one place where this is all collected, but you can read the original paper for each model to get the information.


@ar_ai is correct. The library handles that logic for us.


Somehow I still don't feel the question is being answered exactly. Yes, different models have different preprocessing, but the data augmentation here, including max_zoom, is not preprocessing. Data augmentation is a way to train on more varied data given limited training data. So my personal take is that the function stores the data augmentation parameters here for later use in training.

Correct me if I am interpreting this wrong.


I may not be correct, but here’s what I think.
tfms stands for transformations. It's not only about data augmentation; it's about getting the data ready to pass to our model (normalization and resizing, for example). Since we 'transform' the data anyway, we might as well do data augmentation in the same place. That's why tfms_from_model takes a parameter responsible for data augmentation, and that parameter (aug_tfms) is not in any way dependent on the model we're using.


Yeah, I agree with @bushaev; the data augmentation is fine to use with any type of model.

Yup, that sounds about right to me 🙂


Does aug_tfms transform our original image based on the parameters passed to it, and thus increase the data set (in a way), since we only provided the original image to the model?

If you are asking whether it will add more images to the file system, the answer is no. An easy way to check is to count the images in your /train directories, run your model with augmentation, and count again.

It just creates a bunch of augmented images at runtime as you train your model.
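A toy sketch of that idea (purely illustrative, not the fastai code path): each pass over the data re-draws a random transform of the same stored image, so the model sees many variants while the dataset on disk never grows.

```python
import numpy as np

dataset = [np.arange(12, dtype=np.uint8).reshape(2, 2, 3)]  # one stored image
rng = np.random.default_rng(42)

seen = []
for epoch in range(4):
    for img in dataset:                  # dataset size never changes
        k = int(rng.integers(0, 4))      # random rotation by k * 90 degrees
        augmented = np.rot90(img, k)     # created at runtime, never saved
        seen.append(augmented.copy())

# Still exactly 1 image "on disk"; the 4 training examples were generated on the fly.
print(len(dataset), len(seen))
```

Nothing is written back to the dataset list (or the file system); the augmented arrays exist only for the duration of the training step.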


Yes, that's what I wanted to know. So it acts as if we have more images in the data set, but in reality these images are created as the image is passed through the model, without being stored permanently, right?


You got it.

Thank you for your time.

If I want to use the function tfms_from_model, which package do I need to import?
I would greatly appreciate a reply!

Can anyone tell me what the corresponding method for tfms_from_model is in fastai 1.0?