I see that in fastai, when layer groups are frozen, the BatchNorm layers are deliberately left with `requires_grad=True`.
If I explicitly use the PyTorch-style `requires_grad(layer, False)` call instead, the BatchNorm parameters end up with `requires_grad=False`. Is this going to affect training?
In other words, can I freely use `requires_grad(layer, …)` to freeze and unfreeze layers?
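For concreteness, here is a minimal sketch of the two behaviours I'm comparing (the helper names `freeze_all` and `freeze_except_bn` are just mine for illustration, not fastai or PyTorch APIs):

```python
import torch.nn as nn

def freeze_all(module: nn.Module, flag: bool = False):
    # Flip requires_grad on every parameter, BatchNorm included.
    for p in module.parameters():
        p.requires_grad = flag

def freeze_except_bn(module: nn.Module):
    # Freeze everything except BatchNorm affine parameters (weight/bias),
    # which is what fastai's freezing appears to do per the observation above.
    # Note: requires_grad only controls the learnable affine parameters;
    # the running mean/var are updated by train()/eval() mode, not by gradients.
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for m in module.modules():
        keep_trainable = isinstance(m, bn_types)
        for p in m.parameters(recurse=False):
            p.requires_grad = keep_trainable

# Example: freeze a small body, keeping only the BatchNorm weight/bias trainable.
body = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
freeze_except_bn(body)
```

So the question is whether using something like `freeze_all` on a layer group, rather than the BatchNorm-sparing behaviour, will hurt training.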
Thank you!