Why does the Lesson 7 super-res make_features() call clone()?

Hi all,

I am trying to understand fastai and PyTorch in more detail.

One thing that has bugged me for a while is the make_features() function in the Lesson 7 super resolution notebook.

I understand that it is getting the layer activation outputs from the pretrained VGG16 model, but I just can’t figure out the reason for the clone() call on the ground truth image.

Just to clarify what I mean:

Assume input is the output from the U-Net model, and target is the y (the ground truth image).
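
Here is roughly the pattern I am asking about. This is a simplified sketch of what I think the notebook’s FeatureLoss does, written from memory, so the imports, the hook_outputs usage, and the plain L1 loss are my simplification rather than the exact notebook code:

```python
import torch.nn as nn
import torch.nn.functional as F
from fastai.callbacks.hooks import hook_outputs  # fastai v1

class FeatureLoss(nn.Module):
    def __init__(self, m_feat, layer_ids, layer_wgts):
        super().__init__()
        self.m_feat = m_feat                      # pretrained (frozen) VGG16 body
        self.hooks = hook_outputs([m_feat[i] for i in layer_ids], detach=False)
        self.wgts = layer_wgts

    def make_features(self, x, clone=False):
        self.m_feat(x)                            # forward pass; hooks capture the activations
        return [(o.clone() if clone else o) for o in self.hooks.stored]

    def forward(self, input, target):
        out_feat = self.make_features(target, clone=True)   # clone() only for the target
        in_feat = self.make_features(input)                  # no clone() for the input
        return sum(F.l1_loss(f_in, f_out) * w
                   for f_in, f_out, w in zip(in_feat, out_feat, self.wgts))
```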

  1. When we get the target features through the VGG activations, why do we need to store an independent copy of those tensors in memory, and why don’t we clone() for the input? I guess my question is simply: why is the clone() call needed? (See the small snippet after this list for what I mean by an independent copy.)

  2. Here is my guess for question 1: if we store a copy of the target features from the VGG16 model, then the next time make_features() is called PyTorch already knows where to find those features, so it won’t compute them again. (My understanding is that the target doesn’t change and the VGG16 weights don’t change, so we would basically be recomputing the same thing every time.)
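
To be concrete about what I mean by “an independent copy in memory” in question 1, this is my understanding of clone() on a toy tensor:

```python
import torch

t = torch.ones(3)
c = t.clone()   # c gets its own memory
t.add_(1)       # modify t in place
print(t)        # tensor([2., 2., 2.])
print(c)        # tensor([1., 1., 1.])  the clone is unaffected
```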

If my guess in question 2 is right, then why don’t we just save the target features in __init__(), e.g. self.target_features = make_features(), and use them directly on every call to the loss…
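
In other words, something like this (reusing the imports and the FeatureLoss sketch from above; this is purely hypothetical and untested, and passing target into the constructor like this is made up, it is only to show what I mean by computing the target features once):

```python
class CachedFeatureLoss(FeatureLoss):
    def __init__(self, m_feat, layer_ids, layer_wgts, target):
        super().__init__(m_feat, layer_ids, layer_wgts)
        # my idea: run VGG16 on the target once and keep its activations
        self.target_features = self.make_features(target, clone=True)

    def forward(self, input, target=None):
        in_feat = self.make_features(input)      # only the input goes through VGG16 per call
        return sum(F.l1_loss(f_in, f_tgt) * w
                   for f_in, f_tgt, w in zip(in_feat, self.target_features, self.wgts))
```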

Thanks in advance :slight_smile: