Pre-trained language model for texts that are read backward

In Chapter 10, Jeremy says:

By training another model on all the texts read backward and averaging the predictions of those two models, we can even get to 95.1% accuracy, which was the state of the art introduced by the ULMFiT paper.

I’m assuming that AWD_LSTM is a pre-trained language model for texts that are read “forward”. Is there a pre-trained language model for texts that are read backward?

Or do we have to create one from scratch (to use the above averaging approach)?

Yup! If you pass backwards=True to TextBlock, fastai will automatically grab the pretrained AWD_LSTM trained on backwards text.

(As seen in my outdated notebook here: https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0/blob/d81ead621ff02218f201f4b65f4ad26cb0d7c192/Text%20Notebooks/01_Backwards_and_Forwards.ipynb, and in the docs for TextBlock here: https://docs.fast.ai/text.data.html#TransformBlock-for-text)
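For reference, the averaging step Jeremy describes in the quoted passage can be sketched in a few lines. This is a minimal illustration with made-up probability arrays standing in for the real output of a forward-trained and a backward-trained classifier; with fastai you would get these from each learner's get_preds and then take the element-wise mean.

```python
import numpy as np

# Illustrative class probabilities for 3 reviews, 2 classes (neg/pos).
# These stand in for the softmax outputs of the two real models.
preds_fwd = np.array([[0.60, 0.40],   # forward-reading model
                      [0.30, 0.70],
                      [0.55, 0.45]])
preds_bwd = np.array([[0.70, 0.30],   # backward-reading model, same reviews
                      [0.40, 0.60],
                      [0.35, 0.65]])

# Ensemble: average the probabilities, then pick the most likely class.
preds_avg = (preds_fwd + preds_bwd) / 2
labels = preds_avg.argmax(axis=1)
print(labels)  # → [0 1 1]
```

Note the third review: the forward model alone leans negative (0.55), but averaging with the backward model flips the ensembled prediction to positive, which is exactly how the two directions can correct each other.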


Wow that’s pretty cool!