I started some work on creating language models for Filipino (Tagalog dialect).
Using Tagalog page entries from Wikimedia, the current results for the best-performing language model are:
Perplexity: 26.199, Accuracy: 0.440317
Note that the accuracy is calculated from the validation set.
Next steps are to use the sentencepiece tokenizer and to test it with Filipino (Tagalog) classification datasets.
If anyone's interested, check out the project here.
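For the SentencePiece step, something like the sketch below is what I have in mind (the corpus path, vocab size, and sample sentence are placeholders, not final choices):

```python
import sentencepiece as spm

# Train a subword model on a plain-text Tagalog Wikipedia dump
# ("tl_wiki.txt" and vocab_size=25000 are illustrative placeholders).
spm.SentencePieceTrainer.train(
    input="tl_wiki.txt",          # one sentence or article per line
    model_prefix="tl_sp",         # writes tl_sp.model and tl_sp.vocab
    vocab_size=25000,
    character_coverage=0.9995,    # plenty for a Latin-script language like Tagalog
)

# Load the trained model and split a sample sentence into subword pieces
sp = spm.SentencePieceProcessor()
sp.load("tl_sp.model")
print(sp.encode_as_pieces("Ang Tagalog ay isang wikang Austronesyo."))
```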
I pretrained a language model for Japanese (including SentencePiece tokenization). Thank you @piotr.czapla for your code and kind direction.
Details are here.
I've used the pretrained model for classification of disease-related tweets (MedWeb dataset) and achieved micro-F1 = 0.89, which is 0.03 points below SOTA.
Doesn't seem so; start a new thread and let's get that figured out. We are getting close to ready with the ULMFiT implementation for fastai v1, so you might want to start there. Please start a language thread if there isn't one already.
@Sarnthil, @Virgil,
Remember to start a language thread and share your findings! I will definitely be interested to see how Romanian is going.
Superb! Make a language thread as well. I've learned the hard way that low perplexity does not necessarily translate to good downstream-task performance, even on English, so we need to find a good benchmark to see how your model performs. But the results look promising.
Awesome, this is a good result, and it is superb that you found an open dataset for Japanese. Can you start a language thread like this one: ULMFiT for Malay Language Project?
And put your results there, so we can start cooperating and try to get a bit above the SOTA :). There are plenty of knobs to turn to get good results, and I can run some training on spare GPUs once we get the scripts implemented in ulmfit-multilingual.
I think the idea is that the pre-trained model is trained on the whole language, and then fine-tuning to a domain would be done like in the IMDB example.
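In fastai v1 terms, that would look roughly like the sketch below (the CSV names, column names, pretrained-weight filenames, and hyperparameters are placeholders, not values from this thread):

```python
from fastai.text import *

path = Path('data')

# Fine-tune the general-language LM on the domain corpus
data_lm = TextLMDataBunch.from_csv(path, 'domain_texts.csv', text_cols='text')
learn_lm = language_model_learner(
    data_lm, AWD_LSTM, drop_mult=0.3,
    pretrained_fnames=['lm_wgts', 'lm_itos'],   # your wiki-trained weights and vocab
)
learn_lm.fit_one_cycle(1, 1e-2)                  # train the new head first
learn_lm.unfreeze()
learn_lm.fit_one_cycle(5, 1e-3)                  # then fine-tune the whole LM
learn_lm.save_encoder('ft_enc')

# Reuse the fine-tuned encoder for the downstream classifier
data_clas = TextClasDataBunch.from_csv(
    path, 'labelled.csv', text_cols='text', label_cols='label',
    vocab=data_lm.vocab, bs=32,
)
learn_clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn_clas.load_encoder('ft_enc')
learn_clas.fit_one_cycle(3, 2e-2)
```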
Hey. My dataset is a mixture of French and English and I have a classification problem. Can you give me some advice on using ULMFiT? Should I train a new LM on a mixed French and English wiki? Thanks.
In case someone is interested in the future: for Russian, fine-tuning the language model with the same methodology as in lesson3-imdb.ipynb has achieved the best result in all my experiments so far.
Another couple of questions:
My intuition is that we can achieve better results if we fine-tune the language model on domain-specific data with more training examples. In your experiments, how big were the domain-specific corpora?
Has anyone tried a max vocab of 100,000 or more for the LM fine-tuning step?
On wikitext-103 the model trains in ±18h on a 1080 Ti.
100k is huge; it makes it hard for the model to learn useful relations between words. For Russian you may want to use SentencePiece with 25k tokens; it works really well for Polish (better than SentencePiece with 50k tokens, and way better than 100k tokens).
You may check our paper & presentation; there is an example that shows how different vocabulary sizes influence the way a random sentence is split.
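As a quick illustration (the model filenames and the sample sentence below are made up), you can load models trained with different vocabulary sizes and compare how they split the same sentence:

```python
import sentencepiece as spm

sentence = "To jest przykładowe zdanie."  # any random sentence

# Compare subword splits from hypothetical models trained with 25k/50k/100k vocabs
for label in ("25k", "50k", "100k"):
    sp = spm.SentencePieceProcessor()
    sp.load(f"pl_sp_{label}.model")
    print(label, sp.encode_as_pieces(sentence))
```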
Looks like the English Wikipedia dump will be 25-27 million sentences when I have finished the script to remove "abnormal sentences". From my measurements, one epoch will take 20 hours.
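For reference, the kind of heuristics such a filter can use looks roughly like this (the specific rules and thresholds below are illustrative, not my actual script):

```python
import re

def is_abnormal(sentence: str) -> bool:
    """Heuristic filter: drop very short/long lines, leftover markup,
    and lines that are mostly digits or punctuation."""
    tokens = sentence.split()
    if not (3 <= len(tokens) <= 120):
        return True
    if re.search(r"[<>{}\[\]|]", sentence):      # leftover wiki/HTML markup
        return True
    alpha = sum(ch.isalpha() for ch in sentence)
    return alpha / max(len(sentence), 1) < 0.6   # mostly numbers/punctuation

# Stream the dump once and keep only "normal" sentences
with open("enwiki_sentences.txt", encoding="utf-8") as src, \
     open("enwiki_clean.txt", "w", encoding="utf-8") as dst:
    for line in src:
        if not is_abnormal(line.strip()):
            dst.write(line)
```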
I've also trained a language model and classifier for Hindi, achieving a perplexity of ~35 on a 20% validation set of 55k Hindi Wikipedia articles. I'm using fastai v1 and SentencePiece for tokenization. I would like to compare our models on the BBC News classification dataset. Would you mind sharing your score?
@disisbig, can you make a thread for your language and put it into the top entry? Re comparison: we are in the process of assembling the language models in one repository to ensure reproducibility: https://github.com/n-waves/ulmfit-multilingual. Do you want to contribute your LM and hyperparams?