ULMFit - Italian - v1

(Davide Boschetto) #1

Hi there!
Let’s open this thread… The original assignee for Italian has not been active since June of last year, so I was thinking of opening a new thread and possibly using fastai_v1 to create something for the Italian language!

Dumping itwiki from Wikimedia is still the first required step, followed by JSON conversion using wikiextractor. That’s what I remember for now: I’ll see where to go next!

There are some problems (I’m forced to use Windows, and most ULMFiT code is written for fastai 0.7, which requires torch 0.3.1, which in turn has trouble on Windows), but I think they are easily solvable given the right amount of time!

Hopefully someone else will jump in :slight_smile:

2 Likes

(Andrea de Luca) #2

I tried to build a viable ULMFiT for Italian, but wasn’t able to succeed.

Let me try and find my old code… Meanwhile, let’s stay in touch…

Thanks!

2 Likes

(Fabrizio) #3

Ciao Davide, better to wait for part 2 v3 and see what is coming. Training on Wikipedia is just the first step, and quite easy to do. However, you still need a dataset in Italian to validate your work. That last point is a bit trickier. If you want, let me know about your plans.

2 Likes

(Antonio Lisi) #4

Hi guys,
if you’re interested, we can try to do it before the second part of the course: I want to go deeper into the fastai code, and this would be a way to build something useful at the same time. I’m available on weekends and early mornings or late nights during the week; if you’re in Bologna we can even meet in person on the weekends.

P.S. I usually use AWS in order to have a good GPU; I only have an old 1050Ti for testing code at home.

1 Like

(Nicola Jean) #5

Hi lads, have you already set up a GitHub repo for this? How do you plan to proceed? At work I am forced to run on Windows, but fortunately we managed to get the new TITAN RTX (24 GB RAM, fp16-enabled) up and running on 3 workstations. I do not run model calibration every day, so I might have some spare computing capacity for this project. Cheerio

0 Likes

(Davide Boschetto) #6

Hi there!
I’m a bit busy with personal things at the moment (getting married in a few weeks), so ULMFiT has slipped down my to-do list for these months…
I’m also not up to speed with the new Part 2, so honestly I’m not sure whether new advancements have been made… It seems so from my Twitter feed, but I can’t say for sure!

If anybody is interested, create a repo if one doesn’t exist yet!

0 Likes

#7

I know one postdoc who is working on this for sure, because we had a couple of technical chats a few months ago and his project is supported by a university. He will probably try to get a paper published in the near future. As for this group, is anyone applying fastai techniques to their native domain? That would make this thread much more interesting, imho. Ciao, folks.

0 Likes

(Francesco Gianferrari Pini) #8

Hi, we just released a working Italian language model here:
https://github.com/Quantyca/deepitalian @pietro.latorre @angioia .

Many thanks to @tomsthom for sharing his great deepfrench, on which we based our work.

9 Likes

(Nicola Jean) #9

Has anyone tried the BPE + ULMFiT approach? Something similar to what is reported in https://twitter.com/misterkardas/status/1032286725622702080?lang=en and https://github.com/n-waves/ulmfit4de … It seems to me that, for Italian, it could massively reduce the dictionary size and cope better with unknown words…
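
The intuition behind that can be shown with a toy BPE learner (this is only an illustrative sketch, not the SentencePiece-based setup used in ulmfit4de; all function names and the miniature corpus are made up): a fixed number of merge operations bounds the subword vocabulary, and an unseen inflected word decomposes into known subwords instead of becoming a single `<unk>` token.

```python
from collections import Counter

def merge_pair(symbols, pair):
    """Merge every occurrence of an adjacent symbol pair into one symbol."""
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return tuple(out)

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dict (toy version).

    Each word starts as a sequence of characters; at every step the most
    frequent adjacent pair of symbols is merged into a new symbol.
    """
    vocab = {tuple(w): c for w, c in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        vocab = {merge_pair(symbols, best): c for symbols, c in vocab.items()}
    return merges

def encode(word, merges):
    """Segment a (possibly unseen) word by replaying the learned merges."""
    symbols = tuple(word)
    for pair in merges:
        symbols = merge_pair(symbols, pair)
    return list(symbols)
```

For example, after learning merges on Italian verb forms like "mangiare" and "mangiava", an unseen inflection such as "mangiavano" is segmented into already-known subword pieces, which is exactly why the dictionary can stay small for a morphologically rich language.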

0 Likes