Multilingual ULMFiT

Hello,

I am taking the fastai course (2019), and I am new to this forum. I am interested in applying fastai to NLP in the Dutch language.
I looked at the pretrained models available in ulmfit-multilingual (pretrained_lm_models.zip), but it does not contain Dutch.
Is there a pretrained Dutch model available?

@JoepJ: You could try the language model I trained on a Dutch Wikipedia corpus for a couple of days.

Let me know how it works out for you and whether you need any help. Good luck!

Thanks @benjaminvdb!
That saves a lot of time and effort :smile:.
I will check it out.

I would like to contribute for the Bangla language. Can someone give me a head start? Are there any instructions for building the Wikipedia dataset? That would be very helpful. Thanks. I am also looking forward to using SentencePiece.

The contact person listed in the ULMFiT for Bangla thread seems to have been inactive for over a year. Is there anyone actually working on it?
I also found this project in the wild. It has a Bangla Wikipedia corpus; I didn't get the opportunity to check it out, but it might be useful to you.

I'm also trying to find a way to use Wikipedia data dumps. I'll share the dataset if I manage to put something together.

I havent found anyone else working on Bangla. I am currently working on it.
I have actually checked out the project you mentioned. The dataset seems small. So I was thinking of building a larger dataset.

Here are the data dumps: https://archive.org/search.php?query=bnwiki&and[]=year%3A"2019"
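
If it helps, here is a rough sketch of how you could turn one of those dumps into plain text with the wikiextractor package (pip install wikiextractor); the dump filename below is just a placeholder for whichever bnwiki dump you download:

```python
import json
import subprocess
from pathlib import Path

# Placeholder filename; substitute the bnwiki dump you actually downloaded.
dump = 'bnwiki-pages-articles.xml.bz2'

# Extract articles as JSON lines (one object per article, with a 'text' field).
subprocess.run(['python', '-m', 'wikiextractor.WikiExtractor',
                dump, '--json', '-o', 'extracted'], check=True)

# Gather the article texts from the extracted files (wiki_00, wiki_01, ...).
texts = []
for f in sorted(Path('extracted').rglob('wiki_*')):
    with open(f, encoding='utf-8') as fh:
        texts += [json.loads(line)['text'] for line in fh]

print(f'{len(texts)} articles extracted')
```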

Which platform are you working on?
Both Kaggle kernels and Colab time out even before they finish training on the IMDB example.

I was working on Colab. Where can I find the IMDB dataset you are referring to?

The one in the Lesson 3 video. Colab shows me a 56-hour ETA on training.
This one.

Are you sure you turned on the GPU?
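
A quick way to check from inside the notebook (plain PyTorch, which fastai runs on top of):

```python
import torch

# Should print True once a GPU runtime/accelerator is enabled in Colab or Kaggle.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```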

Facepalm
No I didn't. Thanks.
In my defense though, I prefer Kaggle over Colab.

Anyway, I'll try to find the best database files from the dumps and get them into a reasonable file format.

Time to create a ULMFiT - Bangla Thread :smiley:

I have already completed ULMFiT using other texts instead of Wikipedia. I was wondering about the effect of Wikipedia texts.

Impressive. Then that is all the more reason we should open a ULMFiT-Bangla thread; please do the honors.

Then you can point me in the right direction. Where do I start? I read somewhere about an Indian language project; could you provide the link?

Sure, I will open a thread soon. I am not sure about the Indian language project. I've followed Jeremy's classes and didn't use any separate language-specific tokenization.

Right. Please keep me in the loop when you do.
Cheers.

Hello everyone! I am working as a researcher at Turku University Hospital. We have quite nice GPU resources here, and I have trained a Finnish ULMFiT model on the Finnish Wikipedia using the n-waves scripts. I reached a perplexity of about 23.6. I'll double-check with my employer whether I can open-source the model, the vocabulary, and an example classification done on open data (City of Turku feedback classification with a few specific classes).
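
For anyone comparing numbers: perplexity here is just the exponential of the validation cross-entropy loss, so a quick back-of-the-envelope check looks like this (the loss value below is only illustrative):

```python
import math

def perplexity(val_loss: float) -> float:
    """Perplexity is the exponential of the (validation) cross-entropy loss."""
    return math.exp(val_loss)

# A validation loss of about 3.16 corresponds to a perplexity of about 23.6.
print(perplexity(3.16))
```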

What is the best way of sharing the model and related files if I get the green light? I can put them on GitHub, but is there some model zoo or similar where the different models are more easily accessible?

In case anyone is interested, here is a link to a Finnish model trained on Wikipedia with the n-waves scripts; it got a validation perplexity of about 23.8:

A notebook for making a classifier is also included! Maybe that could be helpful for others who would like to use the pretrained models for experiments. The config n_hid=1150 thing, at least, caused me to lose a few hairs…
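
For anyone else who hits it: the pretrained weights use a 1150-unit hidden layer, so the config passed when building the learner has to match. A minimal sketch in fastai v1, assuming `data_lm` is your DataBunch and the weight/vocab filenames (placeholders below) sit in the models directory:

```python
from fastai.text import *

# The default AWD_LSTM config changed its hidden size at some point,
# so pin n_hid to match the pretrained weights.
config = awd_lstm_lm_config.copy()
config['n_hid'] = 1150

# pretrained_fnames expects the .pth weights and .pkl vocab (extensions
# omitted) under data_lm.path/'models'; these names are placeholders.
learn = language_model_learner(
    data_lm, AWD_LSTM, config=config, drop_mult=0.3, pretrained=False,
    pretrained_fnames=['fi_wikitext', 'fi_wikitext_vocab'])
```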

First of all, thanks for making the Dutch language model available! It helps greatly. I was searching for a Dutch dataset that could be used for benchmarking but couldn't find any. For German I came across http://www.spinningbytes.com/resources/. I was wondering if you know of any Dutch datasets?

Hi James! I'm glad to hear the Dutch language model was of use to you.

Do you mean benchmarking the performance of the language model on downstream tasks, e.g. classification? I've created a dataset for this purpose, the 110k Dutch Book Review Dataset (110kDBRD). You should be able to get around 94% accuracy on the dataset out of the box.
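
In case it helps, the classifier step on 110kDBRD looks roughly like this in fastai v1 (a sketch: `path` points at the dataset folders, `data_lm` is the language-model DataBunch, and the saved-encoder name is a placeholder):

```python
from fastai.text import *

# Build the classification DataBunch, reusing the language model's
# vocabulary so the pretrained encoder weights line up.
data_clas = TextClasDataBunch.from_folder(path, vocab=data_lm.vocab, bs=64)

learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('dbrd_fine_tuned_enc')  # encoder saved after LM fine-tuning

# Gradual unfreezing, as in the ULMFiT paper.
learn.fit_one_cycle(1, 2e-2)
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))
```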

Thanks, Benjamin, for creating the book review dataset and sharing it; great work! Actually, I was looking for a public dataset that is used in an academic paper. Anyway, it doesn't matter much. I used the Dutch language model and tried it on a classification task: with around 600 samples per category, I am getting close to 90% accuracy :slight_smile: