Understanding `precompute=True` with data augmentation

Can someone confirm whether the following is correct, from the network’s perspective…

`precompute=True`
I’ve seen this image before. I know there are features (an eyeball, a top edge, a circle, …) that exist in this image at this layer/node. Accordingly, I’m going to reuse the same activations for these nodes that I computed the last time I saw this image.
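To make that concrete, here’s a minimal PyTorch sketch of my mental model (all names are made up for illustration, this isn’t fastai’s actual implementation): run the frozen layers once, cache the output, and train only the head on the cache.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a frozen "backbone" plays the role of the pretrained
# conv layers, a small "head" is the only part that trains.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
for p in backbone.parameters():
    p.requires_grad = False              # frozen: weights never update

head = nn.Linear(8, 2)
opt = torch.optim.SGD(head.parameters(), lr=0.1)

images = torch.randn(16, 3, 32, 32)      # stand-in dataset
labels = torch.randint(0, 2, (16,))

# precompute=True in spirit: run the frozen layers ONCE and cache the result.
with torch.no_grad():
    cached_acts = backbone(images)

for epoch in range(3):
    # Every epoch reuses the same cached activations; the conv layers
    # never see the images again. Fast, but blind to any augmentation.
    loss = nn.functional.cross_entropy(head(cached_acts), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```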

Data augmentation (transformations) with `precompute=False`
I’ve seen this image before - however, because I’m also applying transformations to the image (data augmentation), the image is now slightly modified (rotated, cropped, …). The features I detected the last time I saw the image are no longer in their previously detected spots, so the node activations I cached earlier (each detecting a particular feature) are no longer valid. The layers have to be “unfrozen” so that new activations are calculated; if the layers stay frozen, the only thing I can do is reuse the cached activations I calculated when I saw the original image, if they are available. I will recalculate activations for the top layer(s) regardless of whether the network is frozen or unfrozen.
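A toy sketch of the `precompute=False` case as I understand it (same made-up names as above, not fastai’s internals). Note that the frozen layers still run a fresh forward pass on every batch, so each epoch can see a differently augmented image even though no conv weights update:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
for p in backbone.parameters():
    p.requires_grad = False              # still frozen: no weight updates...

head = nn.Linear(8, 2)
opt = torch.optim.SGD(head.parameters(), lr=0.1)

images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, 2, (16,))

def augment(x):
    # stand-in augmentation: a random horizontal flip
    return torch.flip(x, dims=[3]) if torch.rand(1).item() < 0.5 else x

for epoch in range(3):
    # ...but the frozen layers still run a fresh forward pass each time,
    # so every epoch can see a differently transformed image.
    acts = backbone(augment(images))
    loss = nn.functional.cross_entropy(head(acts), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```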

Data augmentation (transformations) with `precompute=True`
I’ve seen this image before - however, because I’m also applying transformations to the image (data augmentation), the image is now slightly modified (rotated, cropped, …). But since you have instructed me to use the previously cached activation values (`precompute=True`), I’m going to limit recalculation to the top layer(s) only; for the lower layers I’ll keep using the activations I cached from the original, un-augmented image.
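For reference, this is the workflow I’m describing, assuming I have the old fastai (v0.7) API right (`PATH` is a placeholder):

```python
from fastai.conv_learner import *

PATH = 'data/dogscats/'  # placeholder dataset path

# Augmentations are declared up front, but while precompute=True they are
# effectively ignored: the cached activations came from the original images.
tfms = tfms_from_model(resnet34, 224, aug_tfms=transforms_side_on)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)

learn = ConvLearner.pretrained(resnet34, data, precompute=True)
learn.fit(1e-2, 1)               # fast: head trains on cached activations

learn.precompute = False         # real forward passes from here on...
learn.fit(1e-2, 3, cycle_len=1)  # ...so the augmentations finally kick in
```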

I think even with `precompute=False` all but the last layer remain frozen. But tbh I’m also confused about how the precompute parameter affects training.
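One thing that seems to help untangle it: freezing and precomputing look like two separate switches. Freezing only stops weight updates; a frozen layer still computes fresh activations on every forward pass, which would be why augmentation can work with `precompute=False` even though the layers stay frozen. A quick PyTorch check (toy layer, not fastai):

```python
import torch
import torch.nn as nn

layer = nn.Conv2d(3, 4, 3)
for p in layer.parameters():
    p.requires_grad = False     # "frozen": excluded from gradient updates

a = layer(torch.randn(1, 3, 8, 8))
b = layer(torch.randn(1, 3, 8, 8))
print(torch.equal(a, b))        # False: a frozen layer still computes
                                # fresh activations for every new input
```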