Since lecture 4 I’ve struggled with the never-ending training in the IMDB notebook. Fortunately, I got some pre-trained weights (thanks to @Moody and @wgpubs), so I thought I’d create this post to let fellow students explore the notebook, since many of us have skipped it because of the training time.
The modified notebook doesn’t include the code required to train the learner for language modeling. Instead, the following code loads the pre-trained model:
learner.load('imdb_adam3_c1_cl10_cyc_0')
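If `load` can’t find the weights, check where you put the downloaded file. Assuming the fastai (v0.7) convention, `learner.load(name)` looks for `{PATH}/models/{name}.h5`, where `PATH` is the path the learner was created with. A minimal sketch of that layout (the `data/aclImdb/` path here is just an example, not necessarily your setup):

```python
from pathlib import Path

# Assumption: fastai 0.7's learner.load(name) reads {PATH}/models/{name}.h5,
# where PATH is the dataset path the learner was created with.
PATH = Path('data/aclImdb/')           # example dataset path
name = 'imdb_adam3_c1_cl10_cyc_0'      # weights name passed to learner.load

weights_file = PATH / 'models' / f'{name}.h5'
print(weights_file)                    # where the downloaded .h5 should be placed
```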
Next, you can either train the sentiment model yourself (roughly a 70-minute job) or skip this step and use the pre-trained weights for that part as well:
m3.load_cycle('imdb2x', 4)
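As far as I can tell from the fastai 0.7 source, `load_cycle(name, cyc)` is a thin wrapper that appends the cycle number to the name before calling `load`, so `m3.load_cycle('imdb2x', 4)` expects a file named `imdb2x_cyc_4.h5` in the models directory. A quick sketch of that naming convention (the helper function is mine, just to illustrate):

```python
# Assumption: fastai 0.7's load_cycle(name, cyc) delegates to
# load(f'{name}_cyc_{cyc}'), i.e. it looks for {PATH}/models/{name}_cyc_{cyc}.h5.
def cycle_weights_filename(name, cyc):
    return f'{name}_cyc_{cyc}.h5'

print(cycle_weights_filename('imdb2x', 4))  # imdb2x_cyc_4.h5
```

So if loading fails, rename the downloaded sentiment weights to match that pattern.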
Personally, I recommend using the pre-trained weights for the language modeling part and training the sentiment part yourself. Using a pre-trained model saves time that can then be spent exploring the pieces that are otherwise skipped because of the long-running scripts.
The pre-trained weights can be downloaded from here.

I linked this post to