Why don't we use gradual unfreezing when we fine-tune the language model?

When we fine-tune the classifier we use gradual unfreezing, so why don't we use gradual unfreezing when we fine-tune the language model?

@jeremy


@Temkin114
I'll wait for Jeremy's response, but I assume training the language model is different from training the classifier.
That is what ULMFiT does, in my eyes: we train the later layers to become a classifier while the earlier layers "understand English". Therefore, we use gradual unfreezing to train the classifier carefully, while when we fine-tune the language model all layers can be unfrozen to "learn English".
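If it helps, the idea of gradual unfreezing can be sketched in plain PyTorch. This is a minimal illustration, not fastai's actual implementation: the `freeze_to` helper and the 3-group model below are hypothetical stand-ins for fastai's `Learner.freeze_to` and the ULMFiT layer groups.

```python
import torch.nn as nn

# Hypothetical model with 3 layer groups, standing in for
# the ULMFiT encoder layers plus a classifier head.
model = nn.Sequential(
    nn.Linear(10, 10),  # earlier layer (general language features)
    nn.Linear(10, 10),  # middle layer
    nn.Linear(10, 2),   # classifier head
)

def freeze_to(model, n):
    """Freeze all layer groups before index n; leave the rest trainable.
    n=-1 trains only the last group; n=0 trains everything."""
    groups = list(model.children())
    cut = n if n >= 0 else len(groups) + n
    for i, group in enumerate(groups):
        for p in group.parameters():
            p.requires_grad = i >= cut

# Gradual unfreezing for the classifier: one more group per stage,
# with a few epochs of training between each call.
freeze_to(model, -1)  # stage 1: train only the head
freeze_to(model, -2)  # stage 2: unfreeze one more group
freeze_to(model, 0)   # stage 3: fine-tune the whole network
```

By contrast, when fine-tuning the language model you would just call the equivalent of `freeze_to(model, 0)` once and train all layers together.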


@arora_aman

Thank you for your answer. I have read that paper.
Does that mean we don't need to be as careful when fine-tuning the language model as when fine-tuning the classifier?