I was reading a PyTorch tutorial on Transfer Learning, where they state:
Two major transfer learning scenarios look as follows:
1. Finetuning the convnet:
Instead of random initialization, we initialize the network with a pretrained network, like one trained on the ImageNet 1000 dataset. The rest of the training looks as usual.
2. ConvNet as fixed feature extractor:
Here, we will freeze the weights for all of the network except those of the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
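To make sure I'm reading this right, here is a minimal sketch of how I understand the two scenarios in PyTorch (assuming torchvision's resnet18; num_classes is just a placeholder for the new task):

```python
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 2  # hypothetical: number of classes in the new target task

# Scenario 1: finetuning the convnet.
# Load ImageNet-pretrained weights instead of a random initialization,
# swap in a new final layer, and keep EVERY parameter trainable.
model_ft = models.resnet18(pretrained=True)
model_ft.fc = nn.Linear(model_ft.fc.in_features, num_classes)
optimizer_ft = torch.optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Scenario 2: convnet as a fixed feature extractor.
# Same pretrained weights, but freeze everything first; only the new,
# randomly initialized final layer gets gradients and is trained.
model_fe = models.resnet18(pretrained=True)
for param in model_fe.parameters():
    param.requires_grad = False
model_fe.fc = nn.Linear(model_fe.fc.in_features, num_classes)  # new layer: requires_grad=True by default
optimizer_fe = torch.optim.SGD(model_fe.fc.parameters(), lr=0.001, momentum=0.9)
```

So the only difference I see is which parameters are frozen and handed to the optimizer.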
From what I’ve learned, I would have thought that #2 above would be the one called finetuning.
As for #1, it sounds a bit unfamiliar. Would the only reason to use this first type be to speed up training by starting from pretrained (non-random) weights rather than from scratch? If I have that right, then I have to wonder how useful this really is.