Language Model Zoo 🦍

It doesn’t seem so; start a new thread and let’s get that figured out. We are nearly ready with the ULMFiT implementation for fastai v1, so you might want to start there. Please start a language thread if there isn’t one already.

@Sarnthil, @Virgil,
Remember to start a language thread and share your findings! I will definitely be interested to see how Romanian is going.

Superb! Make a language thread as well. I’ve learned the hard way that low perplexity does not necessarily translate to good downstream performance, even on English, so we need to find a good benchmark to see how your model performs. But the results look promising.
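
To make that concrete, here is a minimal sketch using the fastai v1 text API (the `data/train.csv` path, column layout, and hyperparameters are placeholders, not anyone’s actual setup): perplexity is just exp of the validation loss, but the number to judge the model by is the accuracy of a classifier built on the fine-tuned encoder.

```python
# Minimal sketch (fastai v1 text API); paths and hyperparameters are placeholders.
import math
from fastai.text import *

path = Path('data')  # assumed folder containing train.csv (label, text columns)

# Fine-tune the language model and read off its perplexity.
data_lm = TextLMDataBunch.from_csv(path, 'train.csv')
lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm.fit_one_cycle(1, 1e-2)
val_loss = lm.validate()[0]
print(f'perplexity: {math.exp(val_loss):.1f}')  # low is nice, but not the goal
lm.save_encoder('ft_enc')

# The real benchmark: reuse the encoder for a downstream classifier.
data_clas = TextClasDataBunch.from_csv(path, 'train.csv', vocab=data_lm.vocab)
clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clas.load_encoder('ft_enc')
clas.fit_one_cycle(1, 2e-2)  # judge the model by this accuracy, not perplexity
```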

Awesome, this is a good result, and it is superb that you found an open dataset for Japanese. Can you start a language thread like this one: ULMFiT for Malay Language Project

And put your results there; we can start cooperating and try to get a bit above the SOTA :). There are plenty of knobs to turn to get good results, and I can run some training on spare GPUs once we get the scripts implemented in ulmfit-multilingual.
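
For reference, the knobs I mean are the usual ULMFiT ones: dropout multiplier, learning rates, and gradual unfreezing with discriminative learning rates. A rough sketch, continuing with the `clas` learner from the snippet above (the exact values are starting points to tune, not recommendations):

```python
# Gradual unfreezing with discriminative learning rates (values are guesses).
clas.freeze()                       # 1. train only the new classifier head
clas.fit_one_cycle(1, 2e-2)
clas.freeze_to(-2)                  # 2. unfreeze one more layer group
clas.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))
clas.unfreeze()                     # 3. fine-tune the whole network
clas.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))
```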