ULMFiT vs GloVe vs Word2Vec

How is ULMFiT different from GloVe or Word2Vec, or for that matter fastText?
It would be nice if someone could list the differences among them.
Thanks

GloVe and Word2Vec are just word vectors, i.e. the embedding layer of a model. fastText is a library that provides word vectors.
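To make "just word vectors" concrete, here's a minimal PyTorch sketch (the GloVe file path and the tiny vocabulary are hypothetical placeholders) showing that a pretrained GloVe file becomes nothing more than an embedding layer:

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical path and toy vocab -- substitute your own.
GLOVE_PATH = "glove.6B.100d.txt"
vocab = ["the", "cat", "sat"]
emb_dim = 100

# Parse the GloVe text format: one line per word, "word v1 v2 ... v100".
vectors = {}
with open(GLOVE_PATH, encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

# Build an embedding matrix aligned with our vocab; unknown words get zeros.
matrix = np.zeros((len(vocab), emb_dim), dtype=np.float32)
for i, word in enumerate(vocab):
    if word in vectors:
        matrix[i] = vectors[word]

# The "word vectors" are just this layer's weights.
embedding = nn.Embedding.from_pretrained(torch.from_numpy(matrix))
```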

ULMFiT is a model-training technique consisting of three stages:
1. training a language model on a large, general corpus
2. taking that pretrained language model and fine-tuning it on a task-specific corpus
3. taking the fine-tuned language model and training a classification model on top of it

Across these stages, most of the model (the embeddings and LSTM layers) is transferred from one stage to the next, as sketched below.
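Here's a rough sketch of the three stages using fastai v1's high-level API; the `path`, CSV name, and hyperparameters are placeholders, and stage 1 is covered by the WikiText-103 pretrained weights the library downloads for AWD_LSTM:

```python
from fastai.text import *

path = Path('data')  # placeholder folder containing a texts.csv

# Task-specific corpus for stages 2 and 3 (placeholder CSV).
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv',
                                       vocab=data_lm.train_ds.vocab)

# Stage 1 is already done for us: AWD_LSTM comes pretrained on WikiText-103.
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)

# Stage 2: fine-tune the language model on the task corpus.
learn.fit_one_cycle(1, 1e-2)
learn.unfreeze()
learn.fit_one_cycle(1, 1e-3)
learn.save_encoder('ft_enc')  # keeps the embeddings + LSTM layers

# Stage 3: reuse the fine-tuned encoder inside a classifier.
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ft_enc')
learn.fit_one_cycle(1, 1e-2)
```

The `save_encoder`/`load_encoder` pair is exactly the "most of the model is transferred" part: everything except the task-specific head moves between stages.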


I thought all of them create word embeddings that are used in downstream tasks.
But what is confusing to me is how they differ.
To put it another way: can I use word embeddings generated by ULMFiT in a CNN task?
(I'm planning to create a 2D image-like input by stacking word vectors as rows.)

I’m struggling to understand your question. Word vectors are a bunch of vectors representing words. ULMFiT is a process for training a text classification model. The models used in ULMFiT include word embeddings, but that doesn’t make ULMFiT a word embedding. It’s like you’re asking how a steering wheel is different from driving a car.

Also, what’s the intuition behind trying to create an “image input” with word vectors? Are you thinking of putting it through an image-style CNN with 2D convolutions? Why that instead of a 1D convolution?


Sorry to confuse you, and thanks for the clear explanation; I think I get it now.
I was looking at ULMFiT purely in terms of word embeddings. So it’s similar to ResNet (34, 50, …), in that it’s not just initial weights but also a specified architecture.

For the 2nd part, I am planning to use a CNN and feed in an input sentence, where each word’s embedding forms a row of the 2D input.
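To make that concrete, here is a small PyTorch sketch with made-up sizes. One thing worth noticing, and probably the point of the 1D-convolution question above: a 2D kernel that spans the full embedding width is equivalent to a 1D convolution over the sequence.

```python
import torch
import torch.nn as nn

batch, seq_len, emb_dim = 4, 20, 100       # made-up sizes
x = torch.randn(batch, seq_len, emb_dim)   # each row = one word's embedding

# "Image" view: one channel, height = sequence length, width = embedding dim.
img = x.unsqueeze(1)                                # (4, 1, 20, 100)
conv2d = nn.Conv2d(1, 64, kernel_size=(3, emb_dim))
out2d = conv2d(img)                                 # (4, 64, 18, 1)

# Equivalent 1D view: embedding dims as channels, convolve over the sequence.
seq = x.transpose(1, 2)                             # (4, 100, 20)
conv1d = nn.Conv1d(emb_dim, 64, kernel_size=3)
out1d = conv1d(seq)                                 # (4, 64, 18)
```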

That was a good question - thanks for asking, Sathya.

Hi everybody,

Well, similarly to GloVe and Word2Vec, ULMFiT creates its own word embeddings.
The difference between Word2Vec and GloVe on one side, and ULMFiT on the other, is how those word embeddings are built.

While training the language model (i.e. the first stage), ULMFiT tries to predict the next word in the sequence, whereas Word2Vec (in its CBOW form) tries to predict the word in the middle of a context window. The idea behind both is to learn what a word means from the other words/tokens surrounding it.

In other words, the two approaches differ in the objective they use to train the model that produces the embeddings.
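Here is a toy, purely illustrative PyTorch sketch of the two objectives (random fake tokens; layers are shared only to keep it short, and neither snippet is a faithful implementation of the real models):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, emb_dim = 1000, 50
tokens = torch.randint(0, vocab_size, (1, 8))  # a fake 8-token "sentence"
emb = nn.Embedding(vocab_size, emb_dim)
head = nn.Linear(emb_dim, vocab_size)

# ULMFiT-style language model: predict token t+1 from tokens 0..t.
lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)
hidden, _ = lstm(emb(tokens[:, :-1]))
lm_loss = F.cross_entropy(head(hidden).reshape(-1, vocab_size),
                          tokens[:, 1:].reshape(-1))

# Word2Vec (CBOW)-style: predict the middle word from the average of its
# context window; word order within the window is ignored.
center = 4
context = torch.cat([tokens[:, center - 2:center],
                     tokens[:, center + 1:center + 3]], dim=1)
cbow_loss = F.cross_entropy(head(emb(context).mean(dim=1)),
                            tokens[:, center])
```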

Hopefully, this will help.


One more thing to add: the final step of ULMFiT is not limited to classification. You can build on the fine-tuned language model for other NLP tasks as well.