What does the encoder actually learn? 🤔

That's awesome! I haven't had much luck with freezing the encoder and using the last hidden state as input, with or without LM fine-tuning. Glad to know that end-to-end fine-tuning works!


Thanks @rudraksh. I'm not sure I follow what you meant.

I'm hoping to try out a few more tricks, see if we can get to 0.11 log loss, and then move on to the Universal Sentence Encoder (USE)-based backbone. Please let me know how we can collaborate.


Awesome…


So, in order to facilitate a fair comparison with USE, I froze the LM backbone and passed both questions individually through the encoder, taking the last hidden state as the question embedding. These embeddings were then concatenated, and an MLP was trained to output whether they are duplicates or not. Unfortunately, this did not work well, and I wasn't able to reduce neg log loss on the validation set below 0.6. I then tried fine-tuning the LM backbone on all the questions and repeating the above procedure; in this case, the neg log loss plateaued around 0.5.
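For concreteness, here's a minimal sketch of that frozen-encoder baseline (the `encoder` module, its return signature, and the layer sizes are assumptions for illustration, not the exact code used):

```python
import torch
import torch.nn as nn

class FrozenPairClassifier(nn.Module):
    """Sketch: frozen LM encoder + last-hidden-state embeddings + MLP head."""
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # freeze the LM backbone
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_size, 256), nn.ReLU(),
            nn.Linear(256, 1),               # single logit: duplicate or not
        )

    def embed(self, tokens):
        # assumes the encoder returns (batch, seq_len, hidden_size)
        # activations; take the last timestep as the question embedding
        hidden, _ = self.encoder(tokens)
        return hidden[:, -1, :]

    def forward(self, q1_tokens, q2_tokens):
        pair = torch.cat([self.embed(q1_tokens), self.embed(q2_tokens)], dim=1)
        return self.mlp(pair).squeeze(1)

# Train the MLP head with nn.BCEWithLogitsLoss() against 0/1 duplicate labels.
```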

In retrospect, assuming that the last hidden state captures all the semantics of the sentence seems kinda naive. Maybe the hidden states from all timesteps need to be combined (via an attention mechanism?) in order to get the question embedding.
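For what it's worth, here is a sketch of that attention-pooling idea, using a single learned query vector over all timesteps (one simple choice among many; the names are hypothetical):

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Sketch: attention-weighted sum of all encoder states as the embedding."""
    def __init__(self, hidden_size):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_size))

    def forward(self, hidden):                    # hidden: (batch, seq, size)
        scores = hidden @ self.query              # (batch, seq)
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)
        return (weights * hidden).sum(dim=1)      # (batch, size)
```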


Yes… all the states could add value here. But even I took the last hidden state from the FitLaM encoder.
Also, I didn't do cosine similarity. I just fed the last layer's single output neuron into a BCE log loss.

Finally, I did unfreeze and train end-to-end to get good results. Could you please try that?
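Unfreezing for end-to-end training is only a few lines in plain PyTorch; a sketch, reusing the hypothetical names from the snippet above and assuming discriminative learning rates for the two parameter groups:

```python
import torch

# Sketch: unfreeze the backbone and train everything end-to-end, with a
# gentler learning rate on the pretrained encoder than on the fresh head.
for p in model.encoder.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 1e-4},
    {"params": model.mlp.parameters(),     "lr": 1e-3},
])
```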


The whole point of LM fine-tuning is the fine-tuning. So I'm not sure how useful a comparison it is to freeze the LM backbone!

I did try fine-tuning the LM backbone, but unfortunately that didn't really help it outperform the USE defaults (0.5 NLL vs. 0.3). End-to-end fine-tuning as @narvind2003 suggested definitely gave FitLaM the edge, and I was able to reach 0.25. Possibly we can incorporate multi-task learning and try training on datasets other than wt103 to improve upon the pretrained weights of FitLaM, as suggested in your paper.


You mean pretrained weights, correct?

I asked the TensorFlow Hub team what data they used to pretrain their transformer model. I didn't get a clear response. One of the reasons I think it works so well is the quality and volume of the data they have used. That's the hunch anyway, and we need to benchmark the transformer vs. AWD-LSTM backbones to really find out.

More than volume, I think diversity in datasets and training objectives is the key to their good out-of-the-box performance.

Yes, corrected 🙂

@narvind2003 Dude, I think there's some data leakage between your training and validation sets. In order to make the model insensitive to a specific question order in a pair, you basically duplicate your labelled data and switch the question order. Ideally, you should split your data into training and validation sets before this duplication business, not after.
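Concretely, the leak-free order is: split first, then mirror the question order within each split. A sketch with pandas (file and column names are assumptions):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")  # columns: question1, question2, is_duplicate
train_df, val_df = train_test_split(df, test_size=0.1, random_state=42)

def mirror(split):
    # duplicate the rows with question1/question2 swapped
    swapped = split.rename(columns={"question1": "question2",
                                    "question2": "question1"})
    return pd.concat([split, swapped], ignore_index=True, sort=False)

# Because the swap happens AFTER the split, a pair and its mirror can
# never straddle the train/val boundary.
train_df, val_df = mirror(train_df), mirror(val_df)
```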


Oh ok. Did that change make a difference in your training?

I wasn't able to get an NLL below 0.34 on the validation set, although my implementation is slightly different from yours.


I tweaked the dropouts quite a bit to avoid overfitting. I could get train NLL 0.074 and val NLL 0.154. I had to use an extremely low alpha, and the needle barely moved after training overnight on a Volta V100.
Note: this is the exact train-val split and FitLaM-backed model I've been using so far. Kaggle is throwing a submission error - I could submit yesterday, though - it says the competition is inactive.
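For anyone reproducing that kind of sweep, here is a hypothetical sketch of scaling AWD-LSTM-style dropout slots by a common multiplier; the base values and multipliers are placeholders, not the actual settings used above:

```python
# AWD-LSTM exposes several dropout knobs (names from the AWD-LSTM codebase);
# scaling them together is one simple way to run the sweep.
base = {"dropouti": 0.25, "dropoute": 0.02, "wdrop": 0.2,
        "dropouth": 0.15, "dropout": 0.1}

for mult in (1.0, 0.7, 0.5, 0.3):
    drops = {k: v * mult for k, v in base.items()}
    print(mult, drops)  # feed each setting into the model and compare val NLL
```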

Hope it lets me submit again… in the meantime, let me perform the "human evaluation" of the semantic understanding of these models - the original thing I set out to do.


Yeah, or maybe it's because I only did one epoch of LM training. I'm also only able to use a BPTT of 30 and a batch size of 16 due to GPU limitations.

I'll try increasing the batch size and the number of LM pretraining epochs to see if the performance improves.


I'm confused why Delip Rao says this. Can anyone clarify, please?

Check out @deliprao's Tweet: https://twitter.com/deliprao/status/992583524812115969?s=09

Update: I asked Jeremy who kindly clarified on the same tweet thread. Thanks.

I finally found what I was looking for!!!
In the MERLIN talk by Greg Wayne et al. at DeepMind, he mentions the idea of cutting BPTT down into smaller chunks in order to assign credit over shorter time intervals.
Link - watch at time 34:36
This is in a reinforcement learning setting, where the agent moves around the world and thinks about which events in the recent past have the most effect on what it's seeing/doing currently. Using truncated BPTT, they don't have to look too far back in time. And it makes intuitive sense… if your stomach is upset, it's probably because of what you ate a while ago, not what you ate two days ago.
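In sequence-model terms, truncated BPTT just means detaching the hidden state every `bptt` tokens so gradients flow only within a short window. A minimal PyTorch sketch (the `model(x, hidden)` signature and LSTM-style hidden tuple are assumptions):

```python
import torch

def train_epoch(model, data, criterion, optimizer, bptt=30):
    # data: (batch, total_len) tensor of token ids
    hidden = None
    for i in range(0, data.size(1) - 1, bptt):
        seq_len = min(bptt, data.size(1) - 1 - i)
        x = data[:, i:i + seq_len]               # inputs
        y = data[:, i + 1:i + 1 + seq_len]       # next-token targets
        if hidden is not None:
            # detach: credit is assigned only within this bptt-sized window
            hidden = tuple(h.detach() for h in hidden)
        output, hidden = model(x, hidden)        # output: (batch, seq, vocab)
        loss = criterion(output.reshape(-1, output.size(-1)), y.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```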

This is the first time that something which had struck me while I was working on a model became clearer while watching a totally different talk… even though it doesn't quite work reliably in NLP.
Mind = Blown!


Finally answered: https://arxiv.org/abs/1806.02847


Hello,

I get a NameError for

`kk0 = m[0](V(T([X[0]])))`

in your code. For `V`, it says:

`NameError: name 'V' is not defined`

Could you tell me where I can import this function from?

Thanks
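If it helps: assuming that notebook targets the old fastai 0.7 library, `T` and `V` come from `fastai.core`, where `T` builds a torch tensor and `V` wraps it in a Variable:

```python
# Assumes the pre-1.0 (0.7) fastai library is installed.
from fastai.core import T, V

kk0 = m[0](V(T([X[0]])))  # m and X as in the original notebook
```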