NLP inference worsens after loading saved model

In Chapter 10, we fine-tune a language model on the IMDB dataset. Since training the model for 11 epochs takes a fair bit of time, I decided to train it on a faster, more expensive notebook server. I ran inference there using learn.predict and the outputs looked great:

i disliked this movie because i never really saw it . The story is stale , the score is barely original , the acting is seen as cheap , the story is ridiculous . The scene where the old man is driving the
i disliked this movie because of the over - the - top acting , and the basic plot development . Poor production values , weak screenplay , poor acting , dull cinematography and inane talk . i guess i should know that this is

Then I saved the model and downloaded it to my usual free instance. However, when I initialise my learner and load the model on the new instance and call learn.predict, the outputs aren't good at all and are also full of \n predictions:

i disliked this movie because . but i have to why that i

not sugar at him death the movie . her … as i had the ago to really it , He well not civilization , The see

other .
i disliked this movie because , It is a thing movie . i SEEN it . it is a brings Comedy , It is a other movie . And the then His where changes , He wright that

I guess I'm losing something when loading my saved model into a fresh learner. What can I do (either during saving or loading) to bring my learner back to the state it was in immediately after training, when the predictions were good?
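One possible cause (an assumption on my part, not confirmed from your post) is that the DataLoaders rebuilt from scratch on the free instance produce a vocab in a different order than the one the model was trained with, so every embedding lookup hits the wrong row, which would produce exactly this kind of scrambled output. A toy sketch of the mechanism in plain Python (no fastai, all names hypothetical):

```python
# Vocab built on the training machine: token -> row index
train_vocab = ["movie", "acting", "plot", "story"]
train_idx = {tok: i for i, tok in enumerate(train_vocab)}

# Embedding "weights" saved with the model: row i belongs to train_vocab[i]
weights = [[0.1], [0.2], [0.3], [0.4]]

# Vocab rebuilt on the new instance comes out in a different order
new_vocab = ["story", "plot", "movie", "acting"]
new_idx = {tok: i for i, tok in enumerate(new_vocab)}

def embed(token, idx, weights):
    """Look up a token's embedding row via the given vocab mapping."""
    return weights[idx[token]]

# Same saved weights, same token -- but the rebuilt vocab points at the
# row that belonged to a different token during training.
print(embed("movie", train_idx, weights))  # [0.1]  (correct row)
print(embed("movie", new_idx, weights))    # [0.3]  (row trained for "plot")
```

If that is what's happening, making sure the training-time vocab travels with the weights should help; for instance, learn.export() plus load_learner() pickles the whole Learner, DataLoaders (and thus vocab) included, rather than just the model weights.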

Edit: Should I fine_tune the model for an epoch? Or maybe freeze followed by fit_one_cycle for one epoch?
