Fine-Tuning Fastai-Learner

Can we use the encoder of a fastai-trained classifier to fine-tune for more classes? And in this way keep adding more classes over time?

Thanks for any help.

In theory, this would be possible: you can save the encoder and add a new model head each time you add a new class (see the sketch after the list below). I am curious why you thought of this, but I am skeptical whether this approach will be useful. My concerns are:

  • The new model head would be randomly initialized (at least its last layer), so each time you add a new class, the weights for all classes have to be re-learned.
  • The features your encoder needs to extract would have to be adapted each time you change the classes. You may also produce some dead units during training, because those units are not important for the classes you are currently training on, even though they might become important for later classes.
  • Hyperparameter tuning gets harder, as you essentially have to repeat it each time you add a new class.
  • It is harder to get a feeling for what a good model is. If you tune on the same data, you know which loss or metric corresponds to a good model. But with changing classes you can't be sure whether an improvement came from a better model or from adding an easier class.
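
That said, here is a rough sketch of what the encoder reuse could look like. This assumes a fastai v2 text classifier, where `save_encoder`/`load_encoder` are available; the DataFrames (`df10`, `df13`) and the encoder names are hypothetical placeholders:

```python
from fastai.text.all import *

# Stage 1: train a classifier on the first 10 classes.
# `df10` is a hypothetical DataFrame with 'text' and 'label' columns.
dls10 = TextDataLoaders.from_df(df10, text_col='text', label_col='label')
learn = text_classifier_learner(dls10, AWD_LSTM, metrics=accuracy)
learn.fit_one_cycle(1, 2e-2)
learn.save_encoder('enc_10_classes')  # saves only the encoder, not the head

# Stage 2: build a new learner over 13 classes and reuse the encoder.
# The head is created fresh (randomly initialized) to match the new class count.
dls13 = TextDataLoaders.from_df(df13, text_col='text', label_col='label')
learn13 = text_classifier_learner(dls13, AWD_LSTM, metrics=accuracy)
learn13.load_encoder('enc_10_classes')
learn13.fit_one_cycle(1, 2e-2)
```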

Thanks, Bres, for your detailed reply.
I tried it and, fortunately, achieved a 30% improvement in accuracy when going from 10 to 13 classes, with a single fit_one_cycle.
I am going to apply this progressively, as I have almost 100 classes to account for.
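
For anyone following along, the progressive version could look roughly like this. It is only a sketch: `make_df(n)` is a placeholder for however the data for the first n classes is assembled, and the stage sizes are made up. Note that Bres's caveat still applies: the head is re-learned from scratch at every stage.

```python
from fastai.text.all import *

# Hypothetical stage sizes on the way to ~100 classes.
stages = [13, 25, 50, 100]
enc_name = 'enc_10_classes'  # encoder saved from the initial 10-class model

for n in stages:
    # `make_df(n)` is a placeholder returning a DataFrame for the first n classes.
    dls = TextDataLoaders.from_df(make_df(n), text_col='text', label_col='label')
    learn = text_classifier_learner(dls, AWD_LSTM, metrics=accuracy)
    learn.load_encoder(enc_name)   # reuse the features learned so far
    learn.fit_one_cycle(1, 2e-2)   # the new head is trained from scratch
    enc_name = f'enc_{n}_classes'
    learn.save_encoder(enc_name)   # hand the encoder on to the next stage
```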

Thanks…