What does the encoder actually learn? 🤔

Lovely! Thanks man.

1 Like

Did you try using both arccos and cosine sim and find any improvement with arccos?

Yeah I did, there wasn't any improvement.

1 Like

Ok… I have some news. After ditching all other methods, I found some time to work on the FitLaM-based model.

The Kaggle Quora duplicates winner got a log loss of 0.11 after ensembling a gazillion models and a lot of feature engineering.

Our single FitLaM model, with just a bit of training, gives 0.19 straight out of the box! Holy cow!!

There's still bidirectionality, concat pooling and other stuff to try! So when @jeremy and Sebastian say that FitLaM is akin to AlexNet for NLP… it's not to be taken lightly!

Notebook: https://github.com/arvind-cp/fastai/blob/arvind-cp-LM-eval/courses/dl2/FiTLAM%20-%20Quora%20Duplicate%20Pairs%20-%20STS%20task.ipynb

5 Likes

Note: the default methods in the fast.ai Stepper class don't allow for input X pairs/lists of unequal lengths, so I had to make a few minor edits. Let me know if you need more info.

1 Like

I'd be interested to hear what you did here.

1 Like

I actually took care of the bulk of it in my Pairdataset; if you look at the notebook, it's under the section 'Create dataloader for Quora classifier'.

So in the Stepper class, I modified the step and evaluate methods where self.m is called: if len(xs) > 1, pass [xs] to the model, else pass xs.
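Roughly, the change looks like this (a simplified sketch rather than the exact code from the notebook; it assumes the old fastai v0.7 Stepper from fastai/model.py, whose forward pass is self.m(*xs), and it drops the gradient clipping / regularization bits):

```python
from fastai.model import Stepper  # fastai v0.7

class PairStepper(Stepper):
    """Sketch: wrap the inputs as one argument when a batch is a pair/list."""
    def _inputs(self, xs):
        # "if len(xs) > 1 pass [xs], else pass xs"
        return [xs] if len(xs) > 1 else xs

    def step(self, xs, y, epoch):
        self.opt.zero_grad()
        output = self.m(*self._inputs(xs))   # the model receives the pair as a single list
        loss = self.crit(output, y)
        loss.backward()
        self.opt.step()
        return loss.item()

    def evaluate(self, xs, y):
        preds = self.m(*self._inputs(xs))
        return preds, self.crit(preds, y)
```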

BTW, the validation set accuracy is 98.11%, which I didn't include in the notebook.

1 Like

And a question I've had in the back of my mind for a while now:

Why only 1 backbone?

What's stopping us from having multiple FitLaM backbones and letting a custom head use an attention mechanism to "learn" how to deal with them effectively?

Then you can do:
learn[0][0].load('wikitext103')
learn[0][1].load('imdb')
learn[0][2].load('quora')

It's sort of like how they load the jiu-jitsu program into Neo's head in The Matrix.
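A rough sketch of what such a head might look like (purely illustrative; the class name is made up, and each backbone is assumed to be an encoder that maps the input to a fixed-size embedding):

```python
import torch
import torch.nn as nn

class MultiBackboneHead(nn.Module):
    """Hypothetical head: attend over the embeddings of several LM backbones."""
    def __init__(self, backbones, emb_dim, n_classes):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)  # e.g. wikitext103-, imdb-, quora-pretrained encoders
        self.attn = nn.Linear(emb_dim, 1)          # scores each backbone's embedding
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, x):
        # each backbone encodes the same input into an emb_dim vector
        embs = torch.stack([bb(x) for bb in self.backbones], dim=1)  # (bs, n_backbones, emb_dim)
        weights = torch.softmax(self.attn(embs), dim=1)              # attention over backbones
        pooled = (weights * embs).sum(dim=1)                         # weighted mix of the backbones
        return self.classifier(pooled)
```

Each learn[0][i] above would then just be one entry in backbones.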

Are there any papers/projects where this is shown to work/not-work?

And finally, I found that slowly reducing BPTT from 70 (the value used for the wikitext103 source model) down to 20 (the mean plus one standard deviation of the Quora question lengths) helped training greatly. It was perhaps just a fluke, but it seemed like an intuitive thing to try: the LM has to work harder to predict the next word from a shorter input sequence… plus I was mainly trying to fit bigger batches onto the GPU. So I am calling this BPTT annealing.
Not sure if this is worth pursuing, or whether it already has a proper name in the literature.
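In code, the idea was roughly along these lines (illustrative sketch only; get_lm_data and finetune_lm are stand-ins for whatever builds the language-model data and runs the fit loop in the notebook, not real fastai functions):

```python
# Shrink bptt over successive fine-tuning phases while growing the batch size,
# so GPU memory use stays roughly constant.
for bptt, bs in [(70, 32), (50, 48), (35, 64), (20, 96)]:
    md = get_lm_data(trn_tokens, val_tokens, bptt=bptt, bs=bs)
    finetune_lm(md, n_epochs=1)  # shorter contexts make next-word prediction harder
```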

Seeking feedback from experts and @jeremy.
Thanks!

3 Likes

That's awesome! I haven't had much luck with freezing the encoder and using the last hidden state as input, with or without LM finetuning. Glad to know that end-to-end finetuning works!

1 Like

Thanks @rudraksh. I'm not sure I follow what you meant.

I'm hoping to try out a few more tricks, see if we can get to 0.11 log loss, and then move on to the Universal Sentence Encoder based backbone. Please let me know how we can collaborate.

1 Like

Awesome…

1 Like

So, in order to facilitate a fair comparison with USE, I froze the LM backbone and passed both questions individually through the encoder, taking the last hidden state as the question embedding. These embeddings were then concatenated, and an MLP was trained to output whether the pair is a duplicate or not. Unfortunately, this did not work well, and I wasn't able to reduce the negative log loss on the validation set below 0.6. I then tried fine-tuning the LM backbone on all the questions and repeating the above procedure; in this case, the negative log loss plateaued around 0.5.
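Concretely, the setup was roughly this (a simplified sketch, not my exact code; the encoder is assumed to return the hidden states for every timestep, shaped (seq_len, bs, emb_dim)):

```python
import torch
import torch.nn as nn

class FrozenPairClassifier(nn.Module):
    """Sketch: frozen LM encoder, last hidden state per question, concat + MLP."""
    def __init__(self, encoder, emb_dim, hidden=256):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False              # keep the LM backbone frozen
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                # duplicate vs. not duplicate

    def embed(self, q):
        hidden_states = self.encoder(q)          # (seq_len, bs, emb_dim)
        return hidden_states[-1]                 # last hidden state as the question embedding

    def forward(self, q1, q2):
        e1, e2 = self.embed(q1), self.embed(q2)
        return self.mlp(torch.cat([e1, e2], dim=1))
```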

In retrospect, assuming that the last hidden state captures all the semantics of the sentence seems kinda naive. Maybe the hidden states from all timesteps need to be combined (via an attention mechanism?) in order to get the question embedding.
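Something along these lines, for example (illustrative only):

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Sketch: learned weighted sum over all timesteps instead of the last state."""
    def __init__(self, emb_dim):
        super().__init__()
        self.score = nn.Linear(emb_dim, 1)

    def forward(self, hidden_states):                      # (seq_len, bs, emb_dim)
        weights = torch.softmax(self.score(hidden_states), dim=0)
        return (weights * hidden_states).sum(dim=0)        # (bs, emb_dim)
```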

1 Like

Yes… all the states could add value here. But I too took only the last hidden state from the FitLaM encoder.
Also, I didn't do cosine similarity. I just fed the last layer's single output neuron into the BCE log loss.

Finally, I did unfreeze and train e2e to get good results. Could you please try that?

1 Like

The whole point of LM fine-tuning is the fine-tuning. So I'm not sure how useful a comparison it is to freeze the LM backbone!

I did try fine-tuning the LM backbone, but unfortunately that didn't really help it outperform the USE defaults (0.5 NLL vs 0.3). End-to-end fine-tuning as @narvind2003 suggested definitely gave FitLaM the edge, and I was able to reach 0.25. Possibly we can incorporate multi-task learning and try training on datasets other than wt103 to improve upon the pretrained weights of FitLaM, as suggested in your paper.

1 Like

You mean pretrained weights, correct?

I asked the TensorFlow Hub team what data they used to pretrain their Transformer model. I didn't get a clear response. One of the reasons I think it works so well is the quality and volume of data they have used. That's the hunch anyway, and we need to benchmark the Transformer vs AWD-LSTM backbones to really find out.

More than volume, I think diversity in datasets and training objectives is the key to their good out-of-the-box performance.

Yes, corrected :slight_smile:

@narvind2003 Dude, I think there's some data leakage between your training and validation sets. In order to make the model insensitive to a specific question order in a pair, you basically duplicate your labelled data and switch the question order. Ideally you should split your data into training and validation sets before this duplication business, not after.
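Something like this, for example (illustrative sketch; pairs_df and the column names are just assumed from the Quora CSV):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def add_swapped(df):
    # create the order-swapped copies by relabelling the two question columns
    swapped = df.rename(columns={'question1': 'question2', 'question2': 'question1'})
    return pd.concat([df, swapped], ignore_index=True)

# split FIRST, then augment inside each split, so a (q1, q2) pair never ends up
# in train while its swapped (q2, q1) twin sits in validation
trn_df, val_df = train_test_split(pairs_df, test_size=0.1, random_state=42)
trn_df, val_df = add_swapped(trn_df), add_swapped(val_df)
```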

1 Like