ULMFiT - German

Results

| Experiment | LM Perplexity per Word | Micro F1, GermEval2017 task 1, timestamp 1 | Micro F1, GermEval2017 task 1, timestamp 2 | Macro F1, GermEval2018 binary | Macro F1, GermEval2018 multi |
|---|---|---|---|---|---|
| ULMFiT-sp30k: WikiDE+BTW17 | 157 | 0.765 | 0.781 | 0.719 (ens: 0.72077) | 0.4046 |
| ULMFiT-sp30k: BTW17 | 14 | 0.758 | 0.743 | - | - |
| ULMFiT-vanilla: WikiDE | ? | ? | ? | 0.69x | |
| Naderalvojoud et al. (2017) SWN2-RNN | - | 0.749 | 0.736 | - | - |
| Sayyed et al. (2017) xgboost | - | 0.733 | 0.750 | - | - |
| TUWienKBS coarse 1 | - | - | - | 0.767 | 0.5142 |
| uhhLT fine 3 | - | - | - | 0.7518 | 0.5271 |

Our repo: https://github.com/n-waves/ulmfit4de

Data sets and benchmarks

t-v:
Benchmark: https://ofai.github.io/million-post-corpus/

Kristian, Matthias (source code):
Benchmark-1: Twitter Sentiment from April 2017: F1 (macro-average of pos and neg, ignoring neutral; a sketch of this metric follows below): 65.09
Paper: sb10k-Paper
Data: New sb10k Corpus
Benchmark-2: GermEval-2017 best results (micro-average F1): synchronic: .749; diachronic: .750
Paper: GermEval-2017 Proceedings
Data: GermEval-2017 Data
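
For reference, a minimal sketch (assuming scikit-learn; the label names are made up) of how a macro-averaged F1 over only the positive and negative classes can be computed:

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted labels; "neutral" appears in the data
# but is excluded from the macro average, as in the sb10k benchmark above.
y_true = ["positive", "neutral", "negative", "positive", "neutral"]
y_pred = ["positive", "negative", "negative", "neutral", "neutral"]

# Restricting `labels` makes the macro average ignore the neutral class.
score = f1_score(y_true, y_pred, labels=["positive", "negative"], average="macro")
print(f"macro-F1 (pos/neg only): {score:.4f}")
```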


@t-v, @MatthiasBachfischer, @elyase, @rother, @aayushy
How about we try to get the SOTA for text classification in German and beat the best models of GermEval together?

Could you describe what you were working on, and what did and did not work?

@rother, GermEval is about offensive language; are you sure you have the right words in the vocabulary? 50k sounds like a small number given the number of words in German compared to English.
Do you know how many out-of-vocabulary (OOV) tokens you have? Such a low perplexity may suggest that you have plenty of unknowns. (The more OOV tokens there are, the more the model is rewarded for predicting unk.)
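
To illustrate (a minimal sketch with made-up file names, not part of anyone's pipeline here): counting how many word-level tokens fall outside the LM vocabulary gives the OOV rate that makes perplexity numbers comparable.

```python
from collections import Counter

# Hypothetical inputs: a whitespace-tokenised validation text and the
# word vocabulary actually used by the language model.
with open("valid.txt", encoding="utf-8") as f:
    tokens = [tok for line in f for tok in line.split()]
with open("vocab.txt", encoding="utf-8") as f:
    vocab = set(line.strip() for line in f)

counts = Counter(tokens)
oov = sum(c for tok, c in counts.items() if tok not in vocab)
print(f"tokens: {len(tokens)}, OOV: {oov} ({100 * oov / len(tokens):.2f}%)")
# A high OOV rate deflates perplexity: every rare word collapses onto the
# easy-to-predict unk token, so the reported number looks better than it is.
```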

Can you share the 300k training set you collected? It would be quite useful for training a model.

Great idea, and thanks for organizing; count me in. Can you share the SentencePiece implementation? I have access to relatively powerful infrastructure, so I can help with the experiments. Without time constraints we can probably gather a relatively large German Twitter corpus for the LM.


Hi @piotr.czapla

thanks for taking the initiative here.

I very briefly looked at the sb10k corpus when @MatthiasBachfischer pointed it out. My impression is that the language used on Twitter is quite different from that in Wikipedia. This was also my impression for other Twitter corpora.
I haven’t seriously tried the million post corpus.

I’ve also looked a bit into German QA, but it didn’t work very well. I suspect that SentencePiece would also be quite beneficial for German here, but I never implemented it.

I do have a working LRP (layer-wise relevance propagation) implementation for ULMFiT, which I think makes a great add-on for analysing and showing results.

I’d be happy to collaborate; that’s why I made my training scripts and model public, and I’m glad that a few people took them for a spin.
On the other hand, and I’m not sure how to say this politely, I was tremendously discouraged when I read “Note: research on state of the art is WIP, I’ll post resources/links/referenced papers once it is done” in the state-of-the-art section, and I submitted a bunch of patches to my favourite software projects instead.

I must admit that while Twitter corpora are all the rage, I’m not sure what to think about them.

Best regards

Thomas


@elyase Great, thank you for joining the effort. The repo with SentencePiece is here: https://github.com/n-waves/poleval2018, but let’s create another one and adapt the scripts; later we can make them more generic.

I had a look yesterday: there is a pretty large dataset (10 GB, 1 GB of tweets) from the 2017 German elections. I guess elections can be a good match, with a lot of sentiment and emotion.

@t-v, I’ve just copied what is in the German section of the first post in the Language Model Zoo. The “I” in that sentence is not me saying something; it is just copy and paste. Sorry for the confusion :slight_smile: and for the way it made you feel.

I’m all for collaboration, and I don’t really care about hiding things; what would be the point?

@t-v I see what you mean. I’ve noticed that it was posted a long time ago, so you must know that it is an outdated message.

You are more for open collaborative work, which I totally support!
Although when there is a competition, people fear others piggybacking on their work, and I guess that is what Kristian meant.

Fortunately:

The competition is over; now is collaboration time :smile:

For this work to have any meaning it has to be good and it has to be done in many languages. Then we can write a large paper with Jeremy, Sebastian, and everyone involved, and show how ULMFiT can help push forward NLP around the world :).

It would be super cool if you joined the effort.

One side note: if you ever get discouraged by anything I say, just state that directly and don’t worry about politeness. 99 times out of 100 it is a communication error on my side, and I didn’t mean what other people read :slight_smile:

Guys,

Does anyone know how to make the first post editable?

You mean http://iphome.hhi.de/samek/pdf/BinICISA16.pdf, right?
Can you share the code somewhere? It would be awesome to see what the model is seeing!

I’ve copied the SentencePiece preprocessing pipeline to a new repository.

The PolEval parts are still hardcoded there; I will clean this up, make it more generic, and run it on the 10 GB Twitter dataset to train the model.

The repo works with our copy of fastai, which was extended to include SentencePiece models. I will try to get that merged into fastai at some point, but for now let’s use our copy.

It might be easier if we start working in the same repo, so if you are interested, let me know your GitHub user names and I will add you as collaborators.

Btw, I’ve added the first two experiments:

I would be glad to collaborate, count me in as well.

My work is mostly in transfer learning for text in a wide variety of languages; I have Jeremy (and everyone else involved in the project) to thank for their contributions in this area. I’m currently writing a fastai-style implementation of OpenAI’s Transformer Decoder, which may or may not be useful for this task.

What I worked on: A proprietary dataset that had severe class imbalance. The language model training went pretty smoothly and I was quite happy with the perplexity score on the German Wikipedia corpus.

LM Training

What worked: The alternative cyclical learning rate (use_clr_beta).

The requirement was a smaller model owing to resource limitations, so I brought the embedding dimension down to 300 and the number of hidden units to 1000. The perplexity (that I compare to @t-v’s 32) was 38.

What did not work: Off the top of my head, varying dropout values had a negligible effect.
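
For context, `use_clr_beta` was (roughly) the 1cycle-style learning-rate schedule in the old fastai (0.7) library. A rough sketch of the same idea with the modern PyTorch scheduler, not the poster's exact setup: `model`, `train_loader`, `criterion` and `n_epochs` are assumed to exist, and all hyper-parameter values are illustrative only.

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-7)
scheduler = OneCycleLR(
    optimizer,
    max_lr=1e-3,                              # peak learning rate of the cycle
    total_steps=len(train_loader) * n_epochs,
    pct_start=0.3,                            # fraction of the cycle spent increasing the LR
)

for epoch in range(n_epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()                      # one scheduler step per batch
```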

Classifier training

What (sort of) worked: PyTorch’s WeightedRandomSampler to balance the dataset. The technique worked reasonably well for me for a moderately skewed distribution, but not so much for heavier skews.
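
A minimal sketch of that sampler set-up, assuming a `labels` list with the class of each training example and a `train_ds` dataset (both names are made up here):

```python
import torch
from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler

# Weight each example by the inverse frequency of its class.
class_counts = Counter(labels)
weights = torch.tensor([1.0 / class_counts[y] for y in labels], dtype=torch.double)

# Sampling with replacement oversamples the minority classes within each epoch.
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
loader = DataLoader(train_ds, batch_size=32, sampler=sampler)  # do not also pass shuffle=True
```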

What did not work: (Again, off the top of my head) addressing overfitting by varying dropout or the wd (weight decay) hyper-parameters.


My GitHub username is dust0x.


Hello everybody,

I would like to join you to explore this field and learn, as I’m currently playing around with SentencePiece and ULMFiT (but for non-language data).

My GitHub name is the same as here in the forum.

I would be happy to join & best regards
Michael

@aayushy The OpenAI Transformer is an awesome project to work on; count me in if you need a hand. It is second on my list, after I manage to make use of ULMFiT.
The only issue with Transformers is that they train for a month or something like that (I’ve heard that somewhere on Hacker News; I haven’t seen it in the paper).

Good to know that clr_beta worked well, and thank you for sharing the details of what worked. For Polish, the things that mattered most were the SentencePiece vocab size and the number of layers: 4 was better than 3, and 5 was worse.

@MicPie Cool. If you want some direction, let me know how comfortable you are with fastai, ULMFiT, Python, etc. so I can point you to the things where you could best help. Or alternatively, pick some experiments yourself and bring back the results and trained models :slight_smile:

@MicPie @aayushy I’ve added you both to the repo. There is not much there yet, as I’m trying to adapt the scripts to use the BTW17 set and am fighting with SentencePiece at the moment, as it does not accept BOS/EOS tokens. Once I have a first LM trained, I will publish the changes so that we can start collaborating.
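
For anyone reproducing the SentencePiece step, a minimal sketch (the file names and vocab size are placeholders) of one way to pin the special-token ids at training time and add BOS/EOS manually when encoding:

```python
import sentencepiece as spm

# Train a 30k unigram model; explicitly fix the ids of the special tokens.
spm.SentencePieceTrainer.Train(
    "--input=btw17.txt --model_prefix=sp30k --vocab_size=30000 "
    "--model_type=unigram --unk_id=0 --pad_id=1 --bos_id=2 --eos_id=3"
)

sp = spm.SentencePieceProcessor()
sp.Load("sp30k.model")

# SentencePiece does not insert <s>/</s> on its own, so wrap the ids yourself.
ids = [sp.bos_id()] + sp.EncodeAsIds("Die Wahl ist vorbei.") + [sp.eos_id()]
```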

How about we agree on a plan for how to progress? Here is a proposal; feel free to change it:
We develop:

  • a common validation & training set for normal text like Wikipedia
  • a common validation & training set for comments, as @t-v noticed the language of tweets/comments differs from Wikipedia
    • I’m working on BTW17 - 170 MB of comments from Twitter (should we add sb10k?)
  • a script to train a working model for sentiment analysis using SentencePiece on GermEval 2017

The above should give us a baseline; then we plan a set of experiments to improve it and work on each one separately, sharing intermediate results in GitHub issues and the improved numbers here.

> The perplexity (that I compare to @t-v’s 32) was 38.

@aayushy For the perplexity to make sense, we need to know the OOV number and the text you were working on. (If you have a lot of unknowns, the perplexity goes down very quickly.)

Yes, I read one of your other replies mentioning that German has a much richer vocabulary than English. I used 60,000 tokens, which would explain the perplexity among other things.

Hi,

as already mentioned by @t-v, I also briefly looked at the sb10k dataset but could not get any decent results (probably because the vocabulary of Wikipedia articles is fundamentally different from the vocabulary used in tweets and vice versa).

I would love to collaborate on your work to bring ULMFit to the German language, but I’m afraid that I won’t find the time anytime soon… :frowning:

Btw @rother: I just realized that you have also submitted a paper to the GermEval shared task - will you be in Vienna tomorrow for the poster session?

Best,

Matthias

Hi,

yes, I will be there for the entire workshop day. If you’re there, it would be cool to chat :slight_smile:


Maybe we should create a new metric, like perplexity per 10k tokens or something.
Iirc I did some experiments with 80k tokens before (don’t know why I picked that number) and the perplexity was a good bit higher (which makes perfect sense). I think it might be a good idea to start with the tokenization of the second step (the unlabeled Twitter data, for example), see how many tokens that produces, and work backwards. Maybe do it for different media (forum posts, email, Twitter) to get an empirical estimate of how many tokens are a good overall baseline for the wiki model.

Also, the better the token match between the wiki model and the twitter/newspaper/what-have-you model, the better.
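
On comparability: one way to make numbers from different vocabularies comparable is to report perplexity per word (as in the results table at the top) by renormalising the total loss by the word count instead of the subword-token count. A rough sketch of the arithmetic (all numbers below are made up):

```python
import math

def perplexities(total_nll, n_subwords, n_words):
    """total_nll: summed negative log-likelihood (natural log) over all subword tokens."""
    ppl_subword = math.exp(total_nll / n_subwords)  # depends on the tokenisation / vocab size
    ppl_word = math.exp(total_nll / n_words)        # comparable across tokenisations
    return ppl_subword, ppl_word

# The same total loss looks very different under the two normalisations.
print(perplexities(total_nll=700_000.0, n_subwords=350_000, n_words=250_000))
```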

Edit: I’ll upload everything I did to GitHub when I have some time (for Twitter you are only allowed to upload the IDs, not the text, but I’ll just share the collection script… which is not pretty ;P).

I like this initiative and will post my thoughts when I’m back from Vienna. My quick summary of my GermEval entry is that it was done a bit hastily and there’s tons of room for improvement. I have a piece of paper in my office with all the notes. I’m quite happy that I did everything end to end once, to go through the entire process. Learned a lot; now we can optimize :stuck_out_tongue:


Oops, just checked, and that sentence about posting when I’m done is from me. It wasn’t meant to imply that I want to hold anything back. Just that at that point I had done some literature research and supplied the findings as is, and I guess I never updated the post because I didn’t research more literature :slight_smile:

I’ve also not updated my GitHub repo with the GermEval code yet because it’s pretty ugly and I want to fix it first. No bad intention, mostly little time. I very much prefer open collaboration on these things.

Upon rereading, I now realize it might sound like secrecy, but that was not the intention at all. It’s more incompetence/laziness on my part :rofl:


If someone is willing to host the language model, I’ll gladly upload it somewhere (27.39 perplexity @ 50k tokens; probably some room to improve if run for a few extra epochs). I’m a little too embarrassed to have it officially hosted by Jeremy :slight_smile:
It’s very time-consuming to do this step, and it’s probably better to focus on the later steps and revisit the LM later. In retrospect I should have used the one Thomas linked somewhere; that would probably have saved a lot of time, but I wanted to do the entire process end to end once (it was a great learning experience).

The Twitter CSV file is about 42 MB, but I think you’re technically not allowed to upload it anywhere legally (at least that’s what I was told). Maybe I could share it with a temporary link or something, but I’ll upload the collection script tomorrow if I find the time to clean it up a bit (you’ll need a Twitter account to run it). Iirc one can collect the 300k tweets I gathered in a couple of hours. Still, it would be great if we could build a large collection and not have to download everything individually. Maybe someone who knows the legal situation can chime in.

I’ll be in my office with access to the data and code on Monday.

Edit: moved the post here, accidentally posted it in the other thread

I’ve done that now. The original post author can click ‘make wiki’, fyi.


Wouldn’t a 10k validation set be too small? The perplexity calculation on 350k tokens ran in under 3 minutes. But in general, assembling a data set, or multiple data sets like Wiki, Twitter, and News with, say, 100k words each, could be a good start.
I think 3 different data sets would work better, as the Twitter language is very different (more about that later).

I didn’t quite get what you mean here. Do you want to tokenize to words or to subword tokens? Do you want to know how many unique words can be found in each dataset? And how do you define “good”?

That makes a lot of sense; have you checked it for your words?

Can you add your repo to the wiki at the top?

How about we use a shared Google Drive for that while we are experimenting?
Then we select the best models and push them to GitHub (it has releases where you can upload larger files).

I think it is not a big deal if you do that in the name of science; at least the GDPR is quite relaxed about this. Maybe we simply store it in a private Google Drive for the time being while we run experiments?