ULMFiT - German


(Johannes Lackner) #81

Piotr,
Thank you for these helpful pointers! I am definitely looking forward to seeing
the Model Zoo populated! :wink:
Can’t wait to check out the pretrained QRNN LMs you generously pointed to. I’m not sure yet how to surmount the CUDA obstacles you mentioned, but I will try (I have a GCP account as well).

All the best from Geneva, keep up the great work!


(Piotr Czapla) #82

There is a chance it will work out of the box on Colab; the issue is caused by CUDA 9 not supporting the newest gcc. To fix it, I had to install gcc version 5.


(Johannes Lackner) #83

Yes, of course, I will, as soon as I have something more to show. Colab just finished the first learning rate search (which is further than I got with the other pretrained models, thank you @jyr1).
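For anyone reproducing this, here is a minimal sketch of the learning rate search in fastai v1. `data_lm` and the pretrained file names (`lm_weights`, `lm_itos`) are placeholders for your own DataBunch and the downloaded weight/vocab files:

```python
from fastai.text import language_model_learner, AWD_LSTM

# `data_lm` is an already-built TextLMDataBunch; 'lm_weights' and
# 'lm_itos' stand in for the downloaded pretrained weights and vocab.
learn = language_model_learner(data_lm, AWD_LSTM,
                               pretrained_fnames=['lm_weights', 'lm_itos'],
                               drop_mult=0.3)
learn.lr_find()        # LR range test: increase the LR until the loss diverges
learn.recorder.plot()  # plot loss vs. LR; pick a value on the steepest slope
```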



(Kristian Rother) #84

Quick sanity check, Piotr. I’m currently rebuilding my old model and it is taking a lot longer than before. How long did it take you to train one epoch of the German LM with fast.ai v1 (and on what hardware)?
It currently takes me 15 h, whereas previously it was 2-3 h. I would appreciate a quick “nope, we trained one epoch quite quickly” so that I know I can keep searching for the cause :smiley:


(Piotr Czapla) #85

~1 h per epoch for the QRNN and ~2 h for the LSTM, both with 4 layers, on a 1080 Ti. You can check the logs here; some of them include the training time, since Sylvain added that to the progress bar.
Most of the training was done on either a 1080 Ti or a V100.


(Lisa) #86

Thank you all for sharing!

I’m now using the Wiki model, which fixed my problem. By the way, I’m doing an evaluation of status texts.

Thank you all for your help!!!

Bye :slight_smile:


#87

Hi All,

I have used the pre-trained model from @jyr1 on the dataset from here: GermEval-2018-Data.
The dataset contains 5,009 tweets as the training set and ~3,300 as the test set.
The model achieved 66-70% accuracy, which is close to chance: the labels are in a 2:1 ratio (OTHER:OFFENSE), so always predicting OTHER already yields ~67%.
Could any of you post your numbers if your model achieved a better result?
@jyr1 wrote earlier that his model achieved 93% accuracy on the Amazon review dataset. Could you (@jyr1) apply your model to the dataset linked above and tell us whether it reaches such high accuracy on Twitter data as well?
That would be great!!
:slight_smile:
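For context, here is how the data can be loaded; the file name matches the GermEval release, but the column names are my own choice (as far as I can tell, the files are tab-separated without a header):

```python
import pandas as pd

# GermEval 2018: tab-separated tweets with a coarse label (OTHER/OFFENSE)
# and a fine-grained label; column names here are my own, not official.
train_df = pd.read_csv('germeval2018.training.txt', sep='\t',
                       names=['text', 'coarse', 'fine'],
                       quoting=3)  # tweets may contain quote characters

# Class balance: roughly 2:1 OTHER:OFFENSE, so the majority-class
# baseline is already ~67% accuracy.
print(train_df['coarse'].value_counts(normalize=True))
```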


#88

I’ll quickly summarize the results. The goal was to compare ULMFiT’s sample efficiency to that of other methods. Howard and Ruder call ULMFiT “extremely” sample-efficient in their paper. I got different results on the 10kGNAD.

To evaluate the sample efficiency, I trained ten models for each of nine subset sizes ranging from 1% to 100%. I report the average error rate for the fastText library, a Support Vector Machine (SVM), a TensorFlow NN, and ULMFiT with sub-word tokenization.
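Schematically, the evaluation loop looks like this. `train_and_eval` is a hypothetical stand-in for each of the four methods, and the intermediate subset fractions are illustrative, not necessarily the exact ones I used:

```python
import numpy as np

# Nine subset sizes from 1% to 100%; the intermediate values are
# illustrative placeholders.
SUBSET_FRACTIONS = [0.01, 0.02, 0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 1.00]
N_RUNS = 10

def average_error_rates(train_df, test_df, train_and_eval):
    """Mean error rate per subset size over N_RUNS random subsamples.

    `train_and_eval(subset_df, test_df)` is a hypothetical wrapper
    around one of the compared methods (fastText, SVM, TensorFlow NN,
    ULMFiT) that returns test-set accuracy.
    """
    results = {}
    for frac in SUBSET_FRACTIONS:
        errors = [1.0 - train_and_eval(train_df.sample(frac=frac,
                                                       random_state=seed),
                                       test_df)
                  for seed in range(N_RUNS)]
        results[frac] = float(np.mean(errors))
    return results
```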

For the smaller subsets the TensorFlow NN has the highest sample efficiency; for the larger subsets, starting from 10%, the SVM outperforms the rest. ULMFiT has the highest sample efficiency only on the 5% subset. I can’t say that ULMFiT is “extremely” sample-efficient on the 10kGNAD.

Keep in mind that I was quite limited in terms of GPU power, so someone might be able to find better hyperparameters than I did. Also, experiments on a single dataset are hardly representative of the German language, let alone other languages.

I’m sharing my scripts here.


#89

I didn’t share a classifier, only a pretrained language model. You’d still have to fine-tune it and actually create the classifier. Or did you do this? It’s not entirely clear to me. Applying the classifier I trained on Amazon data doesn’t make sense, as it distinguishes negative from positive, not offensive from non-offensive (you can be very negative without being offensive, for instance).
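Concretely, those two missing steps look roughly like this in fastai v1. This is a sketch; the CSV files, column names, and pretrained file names are all placeholders:

```python
import pandas as pd
from fastai.text import (TextLMDataBunch, TextClasDataBunch, AWD_LSTM,
                         language_model_learner, text_classifier_learner)

# Placeholder data: any DataFrames with a text and a label column will do.
train_df = pd.read_csv('train.csv')
valid_df = pd.read_csv('valid.csv')

# Step 1: fine-tune the pretrained LM on the target-domain texts.
data_lm = TextLMDataBunch.from_df('.', train_df, valid_df, text_cols='text')
lm = language_model_learner(data_lm, AWD_LSTM,
                            pretrained_fnames=['lm_weights', 'lm_itos'])
lm.fit_one_cycle(1, 1e-2)
lm.save_encoder('ft_enc')  # keep the fine-tuned encoder

# Step 2: create the classifier on top of the fine-tuned encoder.
data_clas = TextClasDataBunch.from_df('.', train_df, valid_df,
                                      text_cols='text', label_cols='label',
                                      vocab=data_lm.vocab)
clf = text_classifier_learner(data_clas, AWD_LSTM)
clf.load_encoder('ft_enc')
clf.fit_one_cycle(3, 1e-3)
```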


#90

Hi @jyr1,
So you pretrained the model on Amazon reviews, right?