What does the encoder actually learn? 🤔

Hey, are you using the latest fastai version? Can you try with fastai version 0.7, please?

Thanks for the answer. I am using 1.0, and I think I figured it out as follows:

resizeLength = len(sentenceTrimmed)
inputSentence = tensor(sentenceTrimmed).resize_(resizeLength, 1)
# Sentence encoding is 400-dim; index -1 is the last timestep,
# which should hold the final encoded state.
tmpEmbded = m[0](inputSentence.cuda())
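To make the "take the last timestep" idea concrete, here is a minimal, self-contained PyTorch sketch. `ToyEncoder` is a hypothetical stand-in for `m[0]` (an embedding layer plus an LSTM); the real fastai encoder differs in its return type, so this only illustrates the indexing, not the fastai API.

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for the fastai encoder (m[0] above)."""
    def __init__(self, vocab_size=100, emb_dim=16, hidden_dim=400):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim)  # input: (seq_len, batch, emb_dim)

    def forward(self, x):                # x: (seq_len, batch) of token ids
        out, _ = self.rnn(self.emb(x))
        return out                       # (seq_len, batch, hidden_dim)

enc = ToyEncoder()
tokens = torch.randint(0, 100, (5, 1))  # a 5-token "sentence", batch size 1
out = enc(tokens)
sentence_vec = out[-1, 0]               # final timestep = sentence encoding
print(sentence_vec.shape)               # torch.Size([400])
```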

I am also trying to convert your LM Evaluation notebook to fastai 1.0, and I will share it (citing you) if that is okay with you.


Awesome. Sure, thanks.

And of course it doesn't go well 🙂

I can calculate distances on a pretrained model, but they are wrong even for the simplest words. Does anyone have an idea what could be going wrong?

One mistake I made might have been using the LM encoder, but even after fixing that, the similarities are still not right, even for single words. For example, the word "woman" appears more similar to "king" than to "queen":

woman with king
0.4858151376247406
woman with man
0.8090812563896179
woman with woman
1.0
woman with queen
0.45923975110054016
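For reference, the similarity scores above look like cosine similarities (identical vectors give exactly 1.0). Here is a minimal numpy sketch of that computation, independent of any fastai API; the vectors are placeholders for the word encodings:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: dot product over norms."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical vectors score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1, 2, 3], [1, 2, 3]))  # 1.0
print(cosine_similarity([1, 0], [0, 1]))        # 0.0
```

If scores cluster oddly (e.g. all pairs near 0.5), it can be worth checking that both vectors come from the same layer and that no padding timesteps leak into the encoding.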

Hi @Serbulent, were you able to make any improvements?

One more paper that’s highly relevant: