A simple request

(Esteban J Guillen) #1

@jeremy I know the current deep learning course offering is only available to current students, but I was wondering if you could make an exception?

I am a PhD student at the University of New Mexico (UNM) and have a milestone project coming due at the end of November (and an oral presentation the following week). I am using the ULMFiT framework to show how transfer learning can be applied to a small (<2,000 labeled examples) medical text dataset. I am getting good results: accuracy has gone up from 84% (previous approach) to 92% with ULMFiT. It would be really great if you could send me the link to the IMDb lecture from v3 of the deep learning course. Watching your lecture would help me get the most out of the fast.ai library and would greatly help in preparing my oral presentation.

If you can make an exception you can send me the link at:

ejguill@unm.edu

And if you can’t then I understand.

Thanks,

Esteban Guillen

0 Likes

(Arunoda Susiripala) #2

Hey, usually we don’t @ Jeremy in this forum unless it’s urgent.
I’m not sure whether it’s possible to share the video or not.
But you can see this notebook: https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson3-imdb.ipynb

He didn’t go deep into ULMFiT; frankly, he barely talked about it.
But this might help you: Language Model Zoo 🦍

0 Likes

(Esteban J Guillen) #3

No worries and thanks for pointing me to the notebook.

0 Likes

(魏璎珞) #4

If no arrangement can be made, you can have my seat for $5,000. :sunglasses:

0 Likes

(Esteban J Guillen) #5

Thanks for the offer, but it sounds like ULMFiT hasn’t been covered much in the class so far. I will just look at previously published material.

0 Likes

(Cedric Chee) #6

Hi Esteban.

Yes, Jeremy didn’t talk much about it; I think that is because it will be covered in much more depth in the advanced part of the course (maybe the next version of Part 2).

I am one of the contributors to the Language Model Zoo, working on the Malay language:

You can learn more about ULMFiT in lesson 4 (basic) and lesson 10 (deep dive into AWD-LSTM network, language modelling, fine-tuning, and more).

If you prefer to read, you can check my lesson 10 notes. The exact part is where Jeremy starts to talk about:

  • NLP
  • fastai.text module
  • IMDB language modelling with fastai.text API
  • pre-training on Wikipedia dataset
  • fine-tuning
  • … and so on and so forth
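
To connect those notes back to the paper: two of the fine-tuning tricks covered in lesson 10, discriminative learning rates and the slanted triangular schedule, are short formulas from the ULMFiT paper. Here is a minimal sketch in plain Python (the defaults follow the paper, not the course code, and the layer count in the example is illustrative):

```python
def discriminative_lrs(base_lr, n_layers, factor=2.6):
    """Discriminative fine-tuning: the last layer group trains at
    base_lr, and each earlier group at the next group's rate / factor."""
    # Ordered from the first (lowest) layer group to the last.
    return [base_lr / factor ** (n_layers - 1 - l) for l in range(n_layers)]

def slanted_triangular_lr(t, total_steps, lr_max=0.01, cut_frac=0.1, ratio=32):
    """Slanted triangular schedule: a short linear warm-up for the first
    cut_frac of training, then a long linear decay back to lr_max / ratio."""
    cut = int(total_steps * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return lr_max * (1 + p * (ratio - 1)) / ratio

# Example: four layer groups sharing a base learning rate of 0.01.
lrs = discriminative_lrs(0.01, 4)
peak = slanted_triangular_lr(100, 1000)  # t == cut, so this is lr_max
```

fastai v1 exposes similar ideas through its own API (e.g. `fit_one_cycle` and slice learning rates), so in practice you rarely write these by hand.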

Although my work on ULMFiT was done using fastai v0.7.0 (the older version), it does not end there. If you prefer to use fastai v1 for your work, here’s a rough summary of what’s going on. You can join and follow along.

Many of us are still working to get our ULMFiT language notebooks updated to fastai v1:

We have ironed out many issues while trying to adapt to fastai v1.

Sebastian Ruder, Piotr, Charin, and a few of us are working:

  • to get ULMFiT updated to use fastai v1
  • to train a new ULMFiT model using AWD-QRNN (trying to show it is more efficient)
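
For anyone wondering where the efficiency claim comes from: in a QRNN, the gates are computed by convolutions that run in parallel across the whole sequence, and the only sequential work left is a cheap element-wise pooling step. A toy sketch of that fo-pooling step, using plain Python scalars instead of the vectors a real AWD-QRNN operates on:

```python
def fo_pool(z, f, o, c0=0.0):
    """QRNN fo-pooling: the only step that must run sequentially.
    z: candidate values, f: forget gates in [0, 1], o: output gates
    (in a real QRNN all three come from parallel convolutions)."""
    c, hidden = c0, []
    for z_t, f_t, o_t in zip(z, f, o):
        c = f_t * c + (1 - f_t) * z_t  # element-wise, no matrix multiply
        hidden.append(o_t * c)
    return hidden
```

Because this loop involves no matrix multiplications, it is far cheaper per timestep than an LSTM cell, which is the intuition behind trying AWD-QRNN.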

You can see the dev progress in this thread:

We believe we can get a better result if we use fastai v1.

In addition, now that Google’s multilingual BERT has been released, I think it will be interesting to compare ULMFiT against BERT.

3 Likes

(Piotr Czapla) #7

@Esteban, as @cedric pointed out, we are trying to create a multilingual version of ULMFiT ported to fastai v1.0. The scripts are working, but the accuracy is inferior to Jeremy’s previous work. I’ve been looking into this for some time, and I think it is due to the different tokenization we used (to follow the XNLI task). So if you are after high accuracy, stay with fastai 0.7 for the time being. Once we have the issues fixed in v1.0, I will let everyone know in the Multilingual ULMFiT thread, so simply start watching that channel.
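
To make the tokenization point concrete (a toy example, not the actual XNLI or fastai tokenizers): the pretrained model’s vocabulary is built under one tokenization scheme, so switching schemes at fine-tuning time leaves many tokens out-of-vocabulary, and their embeddings are effectively untrained.

```python
import re

def whitespace_tokenize(text):
    # Scheme A: split on whitespace, punctuation stays attached.
    return text.lower().split()

def word_punct_tokenize(text):
    # Scheme B: split punctuation off into separate tokens.
    return re.findall(r"\w+|[^\w\s]", text.lower())

sentence = "Transfer learning works, even on small datasets!"

vocab = set(whitespace_tokenize(sentence))  # vocabulary built under scheme A
tokens_b = word_punct_tokenize(sentence)    # same text seen under scheme B

oov = [t for t in tokens_b if t not in vocab]
coverage = 1 - len(oov) / len(tokens_b)
# "works," and "datasets!" were single tokens under scheme A, so "works",
# ",", "datasets", and "!" are all out-of-vocabulary under scheme B.
```

Even on one sentence, nearly half the tokens miss the vocabulary; over a whole corpus that kind of mismatch can easily account for an accuracy gap.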

On the other hand, if you want to help out with tuning a multilingual version of ULMFiT for fastai v1.0, let me know.

1 Like