Testing a Language Model

Hello! I am trying to build a language model, similar to the one in the IMDB lecture, using an AWD-LSTM (and fastai.text). I trained it and saved the weights. So, in order to load it I do this:

learner.load('lm_parameters')
m = learner.model

But I am not sure how to test it on actual text. I would like something like m.predict("add text here"). I looked through the documentation a bit, but I didn't find a predict function. Can someone help me with this (or tell me where in the documentation I can find the answer)? Thank you!

I think you're looking for learner.predict():
https://docs.fast.ai/text.learner.html#LanguageLearner.predict

I tried it, but I am not sure what I should pass to it in this case. Normally I pass the kind of data I trained on, but here I am not sure how long a sequence I can pass. Should it be the same size as my bptt?

You can pass in a string of any length. It will be tokenized, and then, starting from that model state, the model will generate however many words of text you request.
Remember that the quality might not be that great. A well-trained language model gets an accuracy of something like 30%, so each "guess" at the next word has roughly a 1-in-3 chance of being right. Is that helpful?
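As a minimal sketch, assuming a fastai v1 setup like the IMDB notebook (data_lm and the weight file name below are placeholders for your own DataBunch and saved weights), it would look something like this:

from fastai.text import *

# data_lm must be built with the same vocab you trained with -- placeholder name
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
learn.load('lm_parameters')  # the weights you saved earlier

# Seed text of any length; ask for 40 more words, lower temperature = less random
print(learn.predict("This movie was", n_words=40, temperature=0.75))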

I think the key thing is to pass learner.predict a few words that are characteristic of the corpus you used for fine-tuning. You will then be able to get a sense of whether the generated text is consistent with what you might see in your corpus.
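For example, if you fine-tuned on IMDB-style reviews, a few review-like seeds (again assuming the fastai v1 predict signature above) give a quick feel for what the model has learned:

# Try a few prompts characteristic of the corpus and compare the continuations
for seed in ["This movie was", "The acting in this film", "I would not recommend"]:
    print(learn.predict(seed, n_words=30, temperature=0.75))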