Troubleshooting word salad output of text generator

I trained a reasonably good text generator (following the IMDB notebook) and was working on putting it into a web app when I noticed that its output had suddenly gotten significantly worse. Trying to figure out where I went wrong eventually led me back to re-running the lesson3-imdb notebook, and I'm getting similarly bad results there. After running the notebook (changing nothing except reducing the batch size), here are my results for

TEXT = "i liked this movie because"
N_WORDS = 40
N_SENTENCES = 2  # defined earlier in the notebook; included here for completeness
print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES)))


i liked this movie because crucible mowgli dispirited noodles pastiche screws brunettes kornman theoretical typical televisions firewood stupid baggage glitz humanities imbibing reflex t.k. spirit jitterbug lately opting conquistadores gurning engagingly atlanteans copout seaview unloved hud nebbishy counterculture elsa clear enrichment drearily aida vampyre gregarious
i liked this movie because thirteenth pankaj graphics thinking frist redbox vegeta lilith rappin dancefloor fresh squeezed thomsett everet paridiso earnestly yaphet hayakawa margulies calvinist childlike rousing inescapably yumi bagman argosy goku flows child- heesters windswept grieves recommendation changing conundrums unique disorientated homogeneous fatigue organist
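For context on the temperature=0.75 argument: it divides the logits before sampling, so lower values make the generator pick safer, higher-probability tokens. Here is a minimal sketch of temperature sampling (my own illustration of the general technique, not fastai's actual implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.75, rng=None):
    """Pick a token index from raw logits, scaled by temperature.

    Lower temperature sharpens the distribution (safer, more repetitive
    output); higher temperature flattens it (more diverse, riskier output).
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Near-zero temperature almost always picks the highest-logit token:
print(sample_with_temperature([5.0, 1.0, 0.5], temperature=0.01))  # -> 0
```

So whatever went wrong here, it isn't the temperature setting itself — 0.75 is a fairly conservative value.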

In contrast, the output the notebook came with was:

i liked this movie because it was clearly a movie . xxmaj so i gave it a 2 out of 10 . xxmaj so , just say something . xxbos xxmaj this is a really stunning picture , light years off of the late xxmaj
i liked this movie because it would be a good one for those who like deep psychological and drama and you can go see this movie if you like a little slow motion and some magic should n’t be there . i would give it

The main differences that stand out to me are the lack of grammatical structure in my output, and the missing punctuation and special tokens. I didn't get any errors while running the notebook, and the language model vocab was the same, but clearly something is wrong.
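One quick way to quantify that difference: fastai's text tokenizer emits special tokens that all start with "xx" (e.g. xxmaj for capitalization, xxbos for beginning of stream), and my broken output contains none of them. A rough diagnostic I sketched to confirm this:

```python
def special_token_ratio(text):
    # fastai's tokenizer prefixes its special tokens with "xx",
    # e.g. xxmaj (capitalization marker) and xxbos (beginning of stream).
    tokens = text.split()
    return sum(t.startswith("xx") for t in tokens) / len(tokens)

broken = "i liked this movie because crucible mowgli dispirited noodles"
expected = ("i liked this movie because it was clearly a movie . "
            "xxmaj so i gave it a 2 out of 10 .")
print(special_token_ratio(broken))    # 0.0 -- no special tokens at all
print(special_token_ratio(expected))  # nonzero -- xxmaj shows up as it should
```

A healthy language model trained on fastai-tokenized text should reproduce those tokens at roughly their training-set frequency, so their total absence suggests something in the predict/decode path, not the model weights.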

Does anyone have an idea of what is going on here or where to look next?
Thanks!! :grimacing:

Update: “fixed” this by downgrading fastai to 1.0.37

For the benefit of others, since I struggled a bit to downgrade, here’s the command to use:

conda install -c fastai fastai==1.0.37

Just doing conda install fastai=1.0.37 (without the -c fastai channel flag) won’t work.