MultiFiT English pre-trained model?

#1

Hi all, have any of you already pretrained the MultiFiT model on English Wikipedia data, using the method proposed in the paper by Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, and Jeremy Howard of the fast.ai community?

(MultiFiT is an improved, more efficient version of ULMFiT, with subword tokenization, QRNNs, the 1cycle policy, label smoothing, etc. It seems that pretrained models are available in the official repo only for languages other than English.)
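For readers unfamiliar with two of those ingredients, label smoothing and the 1cycle learning-rate policy can be sketched in plain Python. This is a minimal illustration following fastai-style formulations; the parameter values (`eps=0.1`, `div=25.0`, `pct_start=0.3`, `final_div=1e4`) are illustrative assumptions, not taken from the MultiFiT paper:

```python
import math

def label_smoothing_ce(probs, target, eps=0.1):
    """Cross-entropy against a smoothed target distribution: the true class
    keeps 1 - eps of the probability mass and eps is spread uniformly over
    all K classes. `probs` are the model's softmax outputs."""
    k = len(probs)
    smoothed = [(eps / k) + ((1 - eps) if i == target else 0.0) for i in range(k)]
    return -sum(q * math.log(p) for q, p in zip(smoothed, probs))

def _cos_anneal(start, end, frac):
    """Cosine interpolation from `start` (frac=0) to `end` (frac=1)."""
    return end + (start - end) / 2 * (1 + math.cos(math.pi * frac))

def one_cycle_lr(step, total_steps, lr_max, div=25.0, final_div=1e4, pct_start=0.3):
    """1cycle schedule: warm up from lr_max/div to lr_max over the first
    pct_start fraction of training, then anneal down to lr_max/final_div."""
    warm = pct_start * total_steps
    if step <= warm:
        return _cos_anneal(lr_max / div, lr_max, step / warm)
    return _cos_anneal(lr_max, lr_max / final_div, (step - warm) / (total_steps - warm))
```

The schedule is queried once per optimizer step; with `eps=0` the smoothed loss reduces to ordinary cross-entropy.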

If you have pretrained it and could share the weights, it would be much appreciated. Thanks in advance!

I’m planning to run some fine-tuning experiments with it under different resource constraints, on English classification datasets, comparing against other models (e.g. ULMFiT, BERT). If no one has pretrained this model on English Wikipedia yet, I’ll try to do so, although my hardware access is currently limited.


(Zachary Mueller) #2

I’m interested in following the discussion, but to my knowledge MultiFiT is meant for multilingual tasks, yes? (i.e. languages other than English)


#3

I think the paper focuses on multilingual applications, but the method itself is not limited to them. It demonstrates a monolingual training approach like ULMFiT: the language model is pretrained on a (non-English) language's Wikipedia, fine-tuned on the text of a classification dataset in the same language, and then fine-tuned with a classification head on that dataset (the paper also has a cross-lingual approach, though). So I think the monolingual approach is just as relevant for English as for any other language.
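That three-stage monolingual pipeline can be sketched with the fastai v1 text API. This is a hypothetical sketch, not the paper's official code: the CSV name, encoder name, and hyperparameters are assumptions, and MultiFiT itself swaps in SentencePiece subwords and QRNNs rather than the `AWD_LSTM` shown here:

```python
# Hypothetical sketch, assuming fastai v1 is installed. Stage 1 (Wikipedia LM
# pretraining) is what this thread is asking about; it ships as the pretrained
# weights that language_model_learner loads by default for English AWD_LSTM.

def ulmfit_monolingual_pipeline(path, csv_name="texts.csv"):
    """Stage 2 fine-tunes the pretrained LM on the target corpus;
    stage 3 trains a classifier head on the same dataset."""
    from fastai.text import (TextLMDataBunch, TextClasDataBunch, AWD_LSTM,
                             language_model_learner, text_classifier_learner)

    # Stage 2: fine-tune the pretrained language model on the target texts.
    data_lm = TextLMDataBunch.from_csv(path, csv_name)
    lm_learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
    lm_learn.fit_one_cycle(1, 1e-2)
    lm_learn.save_encoder("ft_enc")  # illustrative encoder name

    # Stage 3: reuse the fine-tuned encoder for classification.
    data_clas = TextClasDataBunch.from_csv(path, csv_name, vocab=data_lm.vocab)
    clas_learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
    clas_learn.load_encoder("ft_enc")
    clas_learn.fit_one_cycle(1, 1e-2)
    return clas_learn
```

Epoch counts and learning rates are placeholders; in practice each stage uses gradual unfreezing and discriminative learning rates as in the ULMFiT paper.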
