How to use transfer learning from a classification model?

I have a fastai model (just as in the course) which is good at classifying news into certain categories. Now I want to classify legislation into the same categories; however, I have much less data on legislation.

How would you transfer the knowledge gained from classifying news to the legislation domain?

Thanks in advance.


Hey Omar,

The classifier essentially learns a mapping from x to y. Even if your new task has the same y, it has a very different x, so it’s not guaranteed that the news classifier will be useful for the legislation data. In my opinion, you should try both training a new classifier on the legislation data and fine-tuning the news classifier on it, and see whether essentially “pretraining” on news is helpful in this case.


Hello Orendar! Thank you very much for your kind answer. That’s great. Expanding on your answer a bit:

Using fastai, how would you do this fine-tuning of the classifier? In the fastai course video lessons, they only show how to fine-tune from a language model (LM), not from a classifier.

Thanks in advance.

Fine-tuning a new classifier from a pre-trained classifier should be pretty straightforward in fastai. After training your news classifier, save the whole model, not just the encoder! So you would use learn.save() (see docs) instead of learn.save_encoder(). Also save the dataloaders from the news classifier.
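In code, a rough sketch could look like this (untested; news_df and the column names are placeholders for your own data):

```python
from fastai.text.all import *

# Train the news classifier as usual (news_df is a placeholder dataframe
# with a 'text' and a 'category' column).
dls_news = TextDataLoaders.from_df(news_df, text_col='text', label_col='category')
learn = text_classifier_learner(dls_news, AWD_LSTM, metrics=accuracy)
learn.fine_tune(4)

# Save the whole model (encoder + classification head), not just the encoder.
learn.save('news_classifier')        # writes models/news_classifier.pth

# Also keep the dataloaders so their vocab can be reused later
# (fastai DataLoaders are generally picklable).
torch.save(dls_news, 'news_dls.pkl')
```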

Then you create your dataloaders for your target dataset, i.e. legal documents. Here it’s important to pass in the vocab from the dataloaders that were used during pre-training (same as when fine-tuning a classifier from a language model). After you create your learner you can load the pre-trained classifier using learn.load() and are ready for fine-tuning.
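Continuing the sketch (again untested, with placeholder names):

```python
# Reload the news dataloaders to reuse their vocab.
dls_news = torch.load('news_dls.pkl')

# Build dataloaders for the target domain, passing in the pre-training vocab
# so token ids line up with the saved weights. Note: on some fastai versions
# dls.vocab of a classifier is [token_vocab, class_vocab]; use [0] in that case.
dls_legal = TextDataLoaders.from_df(
    legal_df, text_col='text', label_col='category',
    text_vocab=dls_news.vocab,
)

learn = text_classifier_learner(dls_legal, AWD_LSTM, metrics=accuracy)
learn = learn.load('news_classifier')  # load the full pre-trained classifier
learn.fine_tune(4)                     # fine-tune on the legal documents
```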

Note that the procedure described above only works if your classes are the same during pre-training and fine-tuning. If your classes are different you can still follow the same procedure, but you would only save and load the encoder, without the classification head.
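For that case, something like this (sketch, same placeholder names as above):

```python
# Label sets differ: transfer only the encoder, not the head.
learn.save_encoder('news_encoder')     # after training the news classifier

# A new learner on the legal data (vocab shared as above) gets a fresh
# classification head sized for the new classes.
learn_legal = text_classifier_learner(dls_legal, AWD_LSTM, metrics=accuracy)
learn_legal.load_encoder('news_encoder')
learn_legal.fine_tune(4)
```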

Alternatively, if you have a lot of unlabelled data in your domain, you could first train a language model on it and then fine-tune your classifier from there.
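That would be the standard ULMFiT pipeline from the course, roughly (sketch with placeholder dataframes):

```python
# Fine-tune a language model on unlabelled in-domain text ...
dls_lm = TextDataLoaders.from_df(unlabelled_legal_df, text_col='text', is_lm=True)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()])
learn_lm.fine_tune(4)
learn_lm.save_encoder('legal_lm_encoder')

# ... then fine-tune a classifier from that encoder.
dls_clas = TextDataLoaders.from_df(
    legal_df, text_col='text', label_col='category', text_vocab=dls_lm.vocab)
learn_clas = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
learn_clas.load_encoder('legal_lm_encoder')
learn_clas.fine_tune(4)
```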


Hi Stefan! Thank you very much. Your answer is great. I was completely missing the concept of pretraining.

Kind regards.


Hi Stefan, can we use a fastai-trained classifier to fine-tune for more classes by using its encoder, and in this way keep increasing the number of classes gradually?

Thanks for any help.

Yes, that should work, but there are a couple of things to keep in mind. Your fine-tuning dataset should include some training data for all classes, not just the new ones; otherwise the model could forget how to classify the “old” classes during fine-tuning. I would also compare this gradual fine-tuning approach to the standard approach (i.e. fine-tuning a classifier from a pre-trained language model on your entire dataset at once) and see which performs better.
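A rough sketch of one round of growing the label set, for the text case as above (untested; df_old_classes, df_new_classes and dls_old are placeholders):

```python
import pandas as pd

# Encoder from the previous-stage classifier.
learn.save_encoder('stage1_encoder')

# The fine-tuning data covers ALL classes, old and new, so the model
# does not forget the original ones.
df_all = pd.concat([df_old_classes, df_new_classes])
dls_all = TextDataLoaders.from_df(df_all, text_col='text', label_col='category',
                                  text_vocab=dls_old.vocab)

learn_new = text_classifier_learner(dls_all, AWD_LSTM, metrics=accuracy)
learn_new.load_encoder('stage1_encoder')
learn_new.fine_tune(4)
```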

Thanks, Stefan. I tried it, and it showed positive behavior.

One more thing: in fastai1 we used to have DatasetFormatter, so we could access the most similar items, the most unsure predictions, and the top losses, which could then be used with ImageCleaner to clean the dataset.

But in fastai2 there seems to be no DatasetFormatter, so how do we access the most similar and most unsure items? There is ImageClassifierCleaner, but when used it gives the widget to relabel/delete for all images (up to max_n), whereas I need to focus only on the ones that are misclassified.

Thanks for any help.
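One possible workaround is to get at this information through ClassificationInterpretation and learn.get_preds() instead of a dedicated widget. A sketch (untested; assumes a trained vision learner called learn, and it is not an official DatasetFormatter replacement):

```python
from fastai.vision.all import *

# Highest-loss items (closest fastai2 analogue of DatasetFormatter's top losses).
interp = ClassificationInterpretation.from_learner(learn)
losses, idxs = interp.top_losses(20)   # losses and dataset indices, worst first
interp.plot_top_losses(9)              # visualise the worst predictions

# "Most unsure": items whose highest predicted probability is lowest.
probs, targs = learn.get_preds()
unsure_idxs = probs.max(dim=1).values.argsort()[:20]

# Misclassified items only.
preds = probs.argmax(dim=1)
wrong_idxs = (preds != targs).nonzero(as_tuple=True)[0]
```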