[SOLVED] [ULMFit] Weird Similarity Score from Embedding of Pretrained AWD_LSTM

[UPDATE] see the comments below for how this was addressed

I am trying to extract the pretrained embedding from AWD_LSTM and inspect some of its word representations. For example, in the embedding space I want to see how similar two words (e.g. “bad” and “horrible”) are and how dissimilar two other words (e.g. “bad” vs. “nice”) are.

I am not sure whether I did some steps wrong, but the result I am currently getting is pretty weird:

  • the cosine similarity for “bad” vs. “horrible” is ~0.335171
  • “bad” vs. “nice” is ~0.327662

It seems the embedding doesn’t work as I expected, since two semantically opposite words get almost the same score as two semantically similar words.

You can find my simple code here; I would be glad if somebody could point out what’s wrong!
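For context, this is roughly what I am doing (a minimal sketch; `learn` is assumed to be a fastai `language_model_learner` built with AWD_LSTM, and `stoi` the vocab’s token-to-index mapping, both illustrative names):

```python
import torch
import torch.nn.functional as F

# `learn` = fastai language_model_learner(..., AWD_LSTM, pretrained=True)
# `stoi`  = token-to-index mapping of the learner's vocab (illustrative name)
emb = learn.model[0].encoder.weight.detach()  # input embedding matrix, shape (vocab_sz, emb_sz)

def cos_sim(w1, w2):
    v1, v2 = emb[stoi[w1]], emb[stoi[w2]]
    return F.cosine_similarity(v1.unsqueeze(0), v2.unsqueeze(0)).item()

print(cos_sim('bad', 'horrible'))  # ~0.335 in my run
print(cos_sim('bad', 'nice'))      # ~0.328 in my run
```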

Hi!

There are a few reasons for that. First of all, the embeddings in the embedding layer of AWD-LSTM are not trained to reflect semantic similarity, which is the objective in traditional embeddings like word2vec or GloVe; any similarity structure is just a nice side effect of LM training. Second, it is a well-known phenomenon that embeddings for antonyms (words with opposite meanings) end up quite close in the embedding space. This is very intuitive if you think about it: antonyms like good and bad or love and hate have very similar distributions!

Anyway, despite all these issues, ULMFiT first-layer embeddings are quite good. For example, if you compute inner product instead of cosine similarity, you will see that king and queen are much more similar than king and tomato. See this colab.
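Roughly, the check I mean is just a dot product between rows of the embedding matrix (a sketch; `learn` and `stoi` are the same illustrative names as in the snippet above):

```python
import torch

# Reuse the input embedding matrix of the pretrained AWD-LSTM
emb = learn.model[0].encoder.weight.detach()

def inner(w1, w2):
    return torch.dot(emb[stoi[w1]], emb[stoi[w2]]).item()

# Expectation: inner('king', 'queen') comes out clearly larger than inner('king', 'tomato')
```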

The power of AWD-LSTM is that you can extract contextualized word representations, where the vectors for words depend on the context. In my experiments they perform worse than BERT, but again, embeddings are not the core idea of ULMFiT.
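If you want to try that, a rough sketch (assuming fastai v1, where the AWD-LSTM encoder returns `(raw_outputs, outputs)`; other versions differ slightly):

```python
import torch

encoder = learn.model[0]   # the AWD-LSTM module of the language model
encoder.reset()            # clear the RNN hidden state

with torch.no_grad():
    # `ids` = LongTensor of token indices for one tokenized sentence (batch of 1)
    raw_outputs, outputs = encoder(ids)

ctx = outputs[-1]  # last layer's activations: one contextual vector per token
```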

EDIT: I was wrong about normalization here, so I removed that point to avoid confusing people :slight_smile:


@noisefield I appreciate your prompt reply!

I tried more examples with cosine similarity (e.g. “good” vs. “sun”) and the results make more sense now if I look at them in a relative sense (i.e. “good” vs. “sun” is ~0.046 while “good” vs. “nice” is ~0.358). The results are consistent when I switch to the inner product as well!

Knowing that the embedding is not specifically trained for semantic similarity in this case, I think it is already doing a decent job.

btw, I have updated my notebook; you can view it here if you are curious: https://github.com/riven314/ULMFit-IMDB/blob/master/visualize_embedding.ipynb

[EDIT]
For those who read this thread: words with opposite meanings (e.g. queen vs. king, good vs. bad) should still receive a high similarity score if the word embedding is working well. If you want to test this yourself, you can verify it with an embedding specifically trained for semantic similarity (e.g. the embeddings in the magnitude repo).
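For example, with the magnitude library it only takes a couple of lines (a sketch; the vector file name is just a placeholder, pick any pretrained `.magnitude` file from their repo):

```python
from pymagnitude import Magnitude

vectors = Magnitude("glove.6B.300d.magnitude")  # placeholder path to a pretrained file

print(vectors.similarity("good", "bad"))    # antonyms: often still quite high
print(vectors.similarity("king", "queen"))  # related words: high
print(vectors.similarity("good", "sun"))    # unrelated words: noticeably lower
```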
