YangL
(YangLu)
1
So if I’m not mistaken, this is my understanding:
use the same vocabulary and everything, but flip the data around, so that you have another new dataset and a new model/encoder. Okay.
Then I use the new model and train my classifier on that, and get some predictions.
… and then I combine the predictions of the original classifier and the new classifier to get a better result?
Am I getting this right or wrong?
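A minimal sketch of the “flip the data around” step, assuming the corpus is already numericalized with the shared vocabulary (docs and docs_bwd are hypothetical names, not fastai API):

```python
# Each document is a list of token ids produced with the SAME vocabulary
# that the forward language model uses (toy ids for illustration).
docs = [[2, 15, 7, 3], [2, 9, 4, 11, 3]]

# The backward language model sees the same corpus read right to left,
# so each token sequence is simply reversed.
docs_bwd = [list(reversed(doc)) for doc in docs]

print(docs_bwd)  # [[3, 7, 15, 2], [3, 11, 4, 9, 2]]
```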
jeremy
(Jeremy Howard)
2
That’s right. You just average the predictions of the two models.
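For concreteness, a minimal sketch of that averaging, assuming preds_fwd and preds_bwd (hypothetical names) hold the class probabilities predicted for the same examples by the classifiers built on the forward and backward language models:

```python
import numpy as np

# Toy outputs of shape (n_examples, n_classes): class probabilities from
# the forward-based classifier and from the backward-based classifier.
preds_fwd = np.array([[0.2, 0.8],
                      [0.9, 0.1]])
preds_bwd = np.array([[0.4, 0.6],
                      [0.7, 0.3]])

# The ensemble is just the element-wise mean of the two probability arrays.
preds_ens = (preds_fwd + preds_bwd) / 2

print(preds_ens)                 # [[0.3 0.7] [0.8 0.2]]
print(preds_ens.argmax(axis=1))  # predicted classes: [1 0]
```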
pandeyanil
(Anil Kumar Pandey)
3
Ensemble the forward and backward predictions.
Vishucyrus
(Vishal Pandey)
4
What exactly do you mean by “the predictions”? (I am getting a little confused…)
keratin
(Arnav)
5
The probability vectors of size vocab_length for the next word (the predicted word).
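A toy illustration of what such a vector looks like, assuming a made-up vocab_length of 5; the logits here stand in for whatever the language model head produces at one step:

```python
import torch
import torch.nn.functional as F

vocab_length = 5                    # toy vocabulary size, for illustration only
logits = torch.randn(vocab_length)  # stand-in for the LM head's raw scores

# The "prediction" is a probability distribution over the whole vocabulary
# for the next word: one entry per vocabulary item, summing to 1.
probs = F.softmax(logits, dim=0)

print(probs.shape)         # torch.Size([5])
print(probs.sum().item())  # 1.0 (up to float precision)
```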
6
jeremy wrote: “You just average the predictions of the two models.”
I think “predictions” here means the output of the classifiers, not keratin’s “probability vectors of size vocab_length for the next word (the predicted word)”.
(Is it possible to ensemble the results of the two language models?)
ankit1996
(Ankit Kumar Singh)
7
What do you mean by that? Please elaborate on “backward prediction”.