I want to do transfer learning à la ULMFiT on images: transferring from a model trained with Siamese training (https://docs.fast.ai/tutorial.siamese.html) to a classification task.
With text models this is pretty easy because a TextLearner has `save_encoder` and `load_encoder` methods. I would like something similar in an ImageLearner or equivalent.
Doing this currently is quite nasty because the model saved from Siamese training has no classification head. The layer names therefore differ completely from those of a classification model, so the weights cannot easily be loaded.
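For reference, the workaround can be done in plain PyTorch by saving only the encoder's `state_dict` and loading it into the new model's encoder, leaving the fresh head untouched. This is a minimal sketch with hypothetical model classes (`SiameseModel`, `Classifier`, and the `encoder`/`head` attribute names are assumptions, loosely following the structure in the fastai Siamese tutorial), not the actual fastai API:

```python
import torch
import torch.nn as nn

# Hypothetical shared body ("encoder") used by both tasks.
def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),  # -> 8-dim embedding
    )

class SiameseModel(nn.Module):
    """Encodes two images and compares their concatenated embeddings."""
    def __init__(self):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Linear(16, 2)  # 2 x 8-dim embeddings -> same/different
    def forward(self, x1, x2):
        return self.head(torch.cat([self.encoder(x1), self.encoder(x2)], dim=1))

class Classifier(nn.Module):
    """Same encoder, but with a classification head on top."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Linear(8, n_classes)
    def forward(self, x):
        return self.head(self.encoder(x))

# "save_encoder": keep only the encoder weights from the Siamese model.
siamese = SiameseModel()
encoder_state = siamese.encoder.state_dict()

# "load_encoder": load them into the classifier; the head stays freshly
# initialized, ready for fine-tuning on the classification task.
clf = Classifier()
clf.encoder.load_state_dict(encoder_state)
```

Because only the submodule's `state_dict` is saved, the mismatched head names never enter the picture; this is essentially what a built-in `save_encoder`/`load_encoder` for vision would formalize.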
General motivation: better support for transfer learning on image datasets would allow learning more effectively from less labeled data. Currently this mainly works when the dataset is distributed similarly to ImageNet, so that the standard pretrained models apply.