Hello Everyone,
While retraining a pre-trained model (originally trained for Task A) on, say, Task B, the established practice in transfer learning is: if we have a large dataset for Task B, we retrain all the layers of the model; if we have a small dataset for Task B, we freeze the earlier layers of the pre-trained model and retrain only the head layer(s).

In Lesson 1, while playing around with the Oxford-IIIT Pet Dataset to build a dog vs. cat classifier, we used a pre-trained resnet34 model and took advantage of transfer learning via the `fine_tune` method of the fastai library. The fastbook states that `fine_tune` retrains the whole architecture, such that the weights of the earlier layers change slowly while the weights of the head layer(s) change quickly.

So I wanted to ask: how can we customize the number and type of layers added as the head when using the existing `cnn_learner`/`fine_tune` approach?
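To make the question concrete, here is roughly what I have in mind. This is only a guess at the API: I'm assuming `cnn_learner` accepts a `custom_head` argument, that `dls` is the DataLoaders from the lesson, and that the resnet34 body outputs 512 channels (1024 after fastai's concat pooling). Please correct me if the real mechanism is different:

```python
from fastai.vision.all import *

# Hypothetical sketch (not verified): swap the default head for a custom stack.
custom_head = nn.Sequential(
    AdaptiveConcatPool2d(),   # concat of avg- and max-pool -> 2*512 = 1024 features
    Flatten(),
    nn.BatchNorm1d(1024),
    nn.Dropout(0.25),
    nn.Linear(1024, 512),
    nn.ReLU(inplace=True),
    nn.BatchNorm1d(512),
    nn.Dropout(0.5),
    nn.Linear(512, dls.c),    # dls.c = number of classes in the DataLoaders
)

learn = cnn_learner(dls, resnet34, custom_head=custom_head, metrics=error_rate)
```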
Also, is there any way to freeze the earlier layers and update only the head's weights while fine-tuning, instead of updating the weights of all the layers of the pre-trained model (which is how `learn.fine_tune` seems to work)?
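Again to show what I mean, here is a sketch of training only the head, assuming the usual fastai `freeze`/`unfreeze` API applies to this learner:

```python
# Hedged sketch: freeze the pre-trained body so only the head is updated.
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.freeze()                  # freeze every parameter group except the head
learn.fit_one_cycle(5, 3e-3)    # only the head's weights change here

# Later, if the Task B dataset turns out to be large enough:
# learn.unfreeze()
# learn.fit_one_cycle(3, slice(1e-5, 1e-3))  # discriminative learning rates
```

Is this the recommended way, or does `fine_tune` already support this through one of its arguments?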
Thanks
PS: The fastbook also mentions that the head need not be a single layer but can be a collection of layers.
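On that note, I also found a `create_head` helper in the fastai source that seems to build exactly such a multi-layer head, with `lin_ftrs` apparently controlling the sizes of the intermediate linear layers. A sketch of how I understand it (I'm assuming `nf=512`, resnet34's final feature count, and that `create_head` doubles it internally for concat pooling, which may vary by version):

```python
from fastai.vision.all import create_head

# Guessing at the semantics: a head with two hidden linear layers
# (512 and 256 units) for a 2-class (dog vs. cat) problem.
head = create_head(512, 2, lin_ftrs=[512, 256])
print(head)  # BatchNorm/Dropout/Linear/ReLU blocks ending in Linear(256, 2)
```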