Language Model Zoo 🦍

Hi! I'm sorry if this is a n00b mistake. I'm using @lesscomfortable's Spanish LM, which he has graciously shared via a GDrive link on the linked GitHub repo. However, in general, is something like the fwd_wt103.h5 model useful without the corresponding itos_wt103.pkl?

That is, without mapping the classification task's vocab to the LM's vocab, would we get any benefit?
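
For concreteness, this is roughly the vocab-remapping step from the lesson 10 IMDB notebook that I don't see how to do without the itos file (the paths and the my_itos variable here are placeholders for your own files and task vocab):

```python
import collections
import pickle

import numpy as np
import torch

# Pretrained LM weights plus its index-to-token mapping
wgts = torch.load('fwd_wt103.h5', map_location=lambda storage, loc: storage)
itos = pickle.load(open('itos_wt103.pkl', 'rb'))   # LM index -> token
stoi = collections.defaultdict(lambda: -1,
                               {tok: i for i, tok in enumerate(itos)})

enc = wgts['0.encoder.weight'].numpy()
row_mean = enc.mean(0)

# Rebuild the embedding matrix in the target task's vocab order:
# copy the rows for tokens the LM knows, use the mean row for the rest.
my_itos = pickle.load(open('my_task_itos.pkl', 'rb'))  # placeholder path
new_emb = np.zeros((len(my_itos), enc.shape[1]), dtype=np.float32)
for i, tok in enumerate(my_itos):
    idx = stoi[tok]
    new_emb[i] = enc[idx] if idx >= 0 else row_mean
```

Without the itos file there is no way to tell which embedding row belongs to which token, so my guess is the .h5 weights alone are of little use.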

I think you are right, Sam. I'll upload the itos file tomorrow so people can use it with the model. I'll also answer your other GitHub question tomorrow.

2 Likes

Sorry, I know this is very late, but I had this same problem... it seems that, regardless of which language you are working on, fastai requires both that language and English. So just run !python3 -m spacy download en and that will fix it. :no_mouth:

@lesscomfortable - once I had the itos file, everything worked great. Accuracy is almost as high as the English classifier's. Thanks!

1 Like

That's good to hear! If you are going to make any modifications to improve performance, please let me know and we can include them in the repo.

With the German language model by @t-v, I tried to classify emails as part of a project. After fitting the last layer and running lr_find with the default fast.ai code, I get the following plot:

I only have ~1,300 emails as a training set; might this be the reason for the unusual-looking lr_find plot, or did I mess something up along the way?
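
For reference, this is the standard sequence I ran (fastai 0.7 API, with learn being my classifier learner; nothing custom):

```python
learn.freeze_to(-1)   # train only the last layer group first
learn.lr_find()       # sweep learning rates during a short training run
learn.sched.plot()    # plot loss vs. learning rate
```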

Best,
Fabian

1 Like

I have a training set with around 5k examples, and my optimal learning rate is usually around 10^-1.

ULMFiT for Hindi

State of the Art Perplexity for Language Modeling

New Dataset for Hindi Text Classification Challenges:

BBC News Dataset

Call for Help

I am looking for contributors and help to take this further, specifically: experiments to compare ULMFiT against other classical and deep-learning-based text classification approaches.

Please open a GitHub issue!

3 Likes

Hi @mollerhoj!

I saw in the first post that you were working on the Swedish model, but the post I found from you mentions Danish and Norwegian. Am I right?

Tell me so we can collaborate on the Swedish one or I can start working on it! :slight_smile:

Francisco

I did try to find some state of the art, but it seemed really hard: either the dataset had language quite different from Wikipedia (my impression is that Twitter datasets contain a lot of colloquial terms, at least for the sb10k corpus referenced above), or the benchmark wasn't clear to me. I don't know how ULMFiT does on GermEval 2017 (linked above); it might be good to test that. @rother or @MatthiasBachfischer might know something more.

Best regards

Thomas

Hi,

I've updated the fast.ai library with "git pull" and the following error began to occur:


```
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 m = get_rnn_classifier(bptt, 20*70, c, vs, emb_sz=em_sz, n_hid=nh, n_layers=nl, pad_token=1,
      2                        layers=[em_sz*3, 50, c], drops=[dps[4], 0.1],
      3                        dropouti=dps[0], wdrop=dps[1], dropoute=dps[2], dropouth=dps[3])

NameError: name 'get_rnn_classifier' is not defined
```

Any ideas?

Thanks,
Monique

There is a typo in the source code of some versions, and I have encountered the same issue. The quick fix is to either call get_rnn_classifer or to define an alias with get_rnn_classifier = get_rnn_classifer.
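
Concretely, something like this (a minimal sketch, assuming the fastai 0.7 layout where the function lives in fastai.lm_rnn):

```python
# The function name is misspelled ("classifer") in some fastai 0.7 versions.
from fastai.lm_rnn import get_rnn_classifer

# Alias it under the expected spelling so existing notebook code keeps working.
get_rnn_classifier = get_rnn_classifer
```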

Thanks, @yelh!

Hi @pandeyanil

I can help you with Hindi and Sanskrit.

Could you please guide me on how to start?

@shankarj67 if you haven't yet, check out http://course.fast.ai/lessons/lesson10.html, where Jeremy shows how to train and use the language models.
Once you are ready to start, there are also the scripts that Jeremy and Sebastian created for the ablation studies. They are quite useful, since with just command-line parameter changes you can train your model, and they have pretty good documentation here:
https://github.com/fastai/fastai/blob/master/courses/dl2/imdb_scripts/README.md

@t-v, @MatthiasBachfischer, @elyase, @rother, @aayushy

GermEval 2018 has some tasks that are pretty well suited to ULMFiT: classification and fine-grained classification. In case you aren't taking part in the competition already, we can train ULMFiT with SentencePiece on the competition data, and we will be able to compare the results on September 21 (the workshop day).

If you took part in the competition and won, can you share your paper or provide an appropriate citation?
We won task 3 in PolEval 2018 using ULMFiT with SentencePiece for tokenization; unfortunately, that task was just about creating a language model, so we couldn't use the transfer learning. I'm looking for an example where SentencePiece + ULMFiT achieves SOTA on downstream tasks, to justify our claims in the paper.
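
For anyone who wants to try a similar setup, this is the general shape of the SentencePiece step (the file names and vocab size here are illustrative, not the ones from our entry):

```python
import sentencepiece as spm

# Train a subword model on the raw corpus (one sentence per line).
spm.SentencePieceTrainer.Train(
    '--input=corpus.txt --model_prefix=sp '
    '--vocab_size=25000 --model_type=unigram'
)

# Split text into subword pieces before feeding it to the LM pipeline.
sp = spm.SentencePieceProcessor()
sp.Load('sp.model')
print(sp.EncodeAsPieces('This is an example sentence.'))
```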

1 Like

If you take part in any competition with your LMs: one thing that helped us the most was trying many different parameters on a very small corpus (10M tokens); thanks to this, we could check 53 combinations in just under a day.

1 Like

I've been training an LM on clinical/medical text using the MIMIC-III database, and things have been going really well. The initial model I completed today (~13 hours of training time) had a perplexity of ~15 on the validation set, with an average accuracy of 60% in predicting the next word.
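
(For anyone comparing numbers: perplexity here is just the exponential of the validation cross-entropy loss, so the two are interchangeable. The loss value below is an illustrative figure, not my exact one:)

```python
import math

val_loss = 2.71                  # validation cross-entropy in nats/token
perplexity = math.exp(val_loss)  # ~15.0
print(perplexity)
```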

The initial model is a word-level model that uses the tokenization methods from the course; this will be the baseline I'll use to compare different tokenization methods/hyperparameters against.

The initial results seem too good to be true to me, so I'll be digging into it a bit more to see if there's some area where I'm allowing information leakage, or if the model has just gotten really good at predicting nonsense (for example, there's a lot of upper case in my corpus, so I wonder if it's gotten really good at predicting the uppercase token). I'll also need to do some more research to see if there are published papers I can compare results against.
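
(One quick check I have in mind: measure how much of the corpus is just the uppercase marker, since a model that nails a very frequent token gets cheap accuracy. t_up is the marker the course's tokenizer inserts; the token list below is a stand-in for my real corpus:)

```python
from collections import Counter

# Stand-in corpus; in practice this would be the full tokenized text.
tokens = ['t_up', 'bp', ':', 't_up', 'stable', '.', 'patient', 'resting']
counts = Counter(tokens)
up_share = counts['t_up'] / len(tokens)
print(f'{up_share:.1%} of tokens are the uppercase marker')  # 25.0% here
```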

All in all, it's pretty amazing how quickly I've been able to set this up and get things running; thanks to everyone in this thread for sharing their work and thoughts. I'm writing up a blog post about what I'm currently doing and will share it soon as well.

4 Likes

We have an entry for GermEval (binary task only), but I'm fairly confident that it is not that great. Unfortunately, I saw the competition late and had a very heavy workload towards the end that clashed a bit with doing more. Additionally, there were some technical difficulties towards the end (a heatwave in Germany + computers that crunch for 3-4 days = a bad combination). We deliberately kept it very vanilla ULMFiT, so I just used a 50k-token German Wiki LM, about 300k self-collected unlabeled tweets, and just the provided training data. No ensembling. The LM and the Twitter model are pretty decent, I think (<28 perplexity and <18 perplexity respectively). The classifier eventually converged (I underestimated this step), and we got an F1 of about 0.8 on the validation set, which I'd have been very happy with, but a rather disappointing score on the test set. I'll discuss the final results after the event (it's this weekend). If anyone else from these forums attends, shoot me a PM and let's meet/talk :slight_smile:

Even with the very hectic finish, I'd do it again. Very many lessons learned. I'm confident that the results can be improved a good bit, and I have some ideas but little time :slight_smile:

2 Likes

Let's clean up and get ULMFiT working on our languages

Jeremy gave us an excellent opportunity to deliver very tangible results and learn along the way. But it is up to us to get ourselves together and produce working models.

I know that ULMFiT is a beast (sometimes): you need tons of memory, and it can take a full day of warming your room just to see that the language model isn't as good as you wanted. I get it, but that is how deep learning usually feels :slight_smile: if it were easy, there wouldn't be any fun in doing this.

But we are so close. Let's get it done!

How about multiple self-support groups?

I mean a chat with the people who work on the same language model. People who care that your model got a perplexity of 60, who understand whether that is good or bad, and who can offer you an emoji or an animated GIF.

A support group == a thread for each language.

If you are in, vote below to join a language group and start training.
The first person who votes should create a thread and link it in the first post above (you have 3 votes):

  • Bengali
  • Chinese (Simplified)
  • Chinese (Traditional)
  • Danish
  • Esperanto
  • Estonian
  • Finnish
  • French
  • German
  • Hebrew
  • Hindi
  • Italian
  • Indonesian
  • Japanese
  • Korean

0 voters

  • Malay
  • Malayalam
  • ** Medical
  • ** Music (generating music in the style of Mozart & Brahms)
  • Norwegian
  • Polish
  • Portuguese
  • Russian
  • Sanskrit
  • Spanish
  • Swahili
  • Swedish
  • Tamil
  • Telugu
  • Thai
  • ** isiXhosa

0 voters