Lesson 10: testing the NLP text classifier by loading a pretrained model

After following the steps in 'imdb.ipynb', I would like to use the network to test new inputs that don't have any labels.
Since I would like to use it standalone, I am thinking of reusing the last saved model.
I saw how the testing itself is done.

However, the question is how to load the data and the model.
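On the data side, for inference you only need the vocabulary mapping that was built during training, not the labeled dataset itself. Below is a minimal stdlib-only sketch of that idea, assuming the notebook's convention of saving `itos` (id → token) and rebuilding `stoi` (token → id) with an unknown-token default; the file name 'itos.pkl' and the tiny vocabulary are hypothetical stand-ins:

```python
import collections
import pickle

# At training time the notebook saves itos (the id -> token list); the
# defaultdict stoi itself is not picklable because of its lambda, which
# is why only the list is saved. The vocabulary here is a toy stand-in.
itos = ['_pad_', '_unk_', 'the', 'movie', 'was', 'great']
with open('itos.pkl', 'wb') as f:
    pickle.dump(itos, f)

# At inference time: reload itos and rebuild stoi, with unknown tokens
# mapping to id 1 ('_unk_').
with open('itos.pkl', 'rb') as f:
    itos = pickle.load(f)
stoi = collections.defaultdict(lambda: 1, {tok: i for i, tok in enumerate(itos)})

def numericalize(tokens):
    """Turn the tokens of a *new*, unlabeled review into ids for the model."""
    return [stoi[t] for t in tokens]

ids = numericalize(['the', 'movie', 'was', 'terrible'])
print(ids)  # -> [2, 3, 4, 1]; 'terrible' is out of vocabulary, so it maps to 1
```

With this saved alongside the model weights, new test input can be numericalized at any time without touching the training data.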
For the data, I have already passed the tokenization step etc. However, to define m:

```python
m = get_rnn_classifer(bptt, 20*70, c, vs, emb_sz=em_sz, n_hid=nh, n_layers=nl, pad_token=1,
                      layers=[em_sz*3, 50, c], drops=[dps[4], 0.1],
                      dropouti=dps[0], wdrop=dps[1], dropoute=dps[2], dropouth=dps[3])
```

I need c, which depends on the labels.

However, when I try to use the pretrained function as for other types of networks, I get the error `AttributeError: type object 'RNN_Learner' has no attribute 'pretrained'`:

learn = RNN_Learner.pretrained(md, TextModel(to_gpu(m)), preCompute = True)

I wonder what the right way is to save a pretrained language model and then test it with new input. Since in my case the test input will change, I don't want to load it initially, as I do when creating 'learn = RNN_Learner(md, TextModel(to_gpu(m)), opt_fn=opt_fn)' to train the network.

I wonder whether it is sufficient to just load 'clas_2', which was saved at the last stage, or whether I need to define m (get_rnn_classifier), then define learn (RNN_Learner), and so on.
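As far as I understand, in fastai 0.7 `learn.save('clas_2')` writes only the model's state dict (a mapping from parameter names to weight arrays), not the Python object that defines the architecture, so you do need to redefine m with the same hyperparameters, wrap it in RNN_Learner, and then `learn.load('clas_2')`. A stdlib-only sketch of why, using pickle and plain lists as stand-ins for the torch checkpoint (file name and parameter names are hypothetical):

```python
import pickle

# Stand-in for torch.save(model.state_dict(), ...): the checkpoint is
# just parameter names mapped to arrays of numbers.
state_dict = {'encoder.weight': [[0.1, 0.2], [0.3, 0.4]],
              'decoder.bias': [0.0, 0.0]}
with open('clas_2.pkl', 'wb') as f:
    pickle.dump(state_dict, f)

# At inference time the file gives you numbers, not a model object --
# which is why the architecture (m = get_rnn_classifier(...)) must be
# rebuilt first and the weights copied into it, the step that
# learn.load('clas_2') performs for you.
with open('clas_2.pkl', 'rb') as f:
    wgts = pickle.load(f)

print(sorted(wgts))  # -> ['decoder.bias', 'encoder.weight'] (names only, no architecture)
```

So loading 'clas_2' alone is not enough; the model definition has to exist before the weights can go anywhere.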

Or do I need to load the weights the way it is done for the wikitext103 model (wgts = torch.load(PRE_LM_PATH, map_location=lambda storage, loc: storage))?
