I used the code below to load a fastai model as a PyTorch model.
But I recently upgraded my fastai version, and now it throws an error that I didn't hit before.
import numpy as np
import torch

# these parameters aren't used, but this is the easiest way to get a model
bptt, em_sz, nh, nl = 70, 400, 1150, 3
drop_out = np.array([0.4, 0.5, 0.05, 0.3, 0.4]) * 0.5
drop_mult = 1.
dps = drop_out * drop_mult
ps = [0.1]
ps = [dps] + ps
num_classes = 3 # this is the number of classes we want to predict
lin_ftrs = [50]  # value was lost in the paste; [50] is fastai's usual default here
layer = [em_sz * 3] + lin_ftrs + [num_classes]
vs = len(self.tokenizer)
self.model = get_rnn_classifier(bptt, 20 * 70, num_classes, vs, emb_sz=em_sz, n_hid=nh, n_layers=nl,
                                pad_token=1, layers=layer, drops=ps,
                                weight_p=dps, embed_p=dps, hidden_p=dps)
self.model.load_state_dict(torch.load(model_path,  # model_path: the saved weights file (name lost in the paste)
                                      map_location=lambda storage, loc: storage))
RuntimeError: Error(s) in loading state_dict for SequentialRNN:
size mismatch for 0.encoder.weight: copying a param with shape torch.Size([5999, 400]) from checkpoint, the shape in current model is torch.Size([3699, 400]).
size mismatch for 0.encoder_dp.emb.weight: copying a param with shape torch.Size([5999, 400]) from checkpoint, the shape in current model is torch.Size([3699, 400]).
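For reference, the mismatch is easy to reproduce outside fastai: the checkpoint's embedding weight carries the vocab size it was trained with (5999 rows), so loading it into a model built from a smaller vocabulary (3699 rows) fails the same way. This is a minimal standalone sketch, not my actual code:

```python
import torch.nn as nn

# Embedding saved with a 5999-token vocab, as in the checkpoint...
saved = nn.Embedding(5999, 400)
state = saved.state_dict()

# ...loaded into a model built from a tokenizer with only 3699 tokens
current = nn.Embedding(3699, 400)
try:
    current.load_state_dict(state)
except RuntimeError as err:
    # PyTorch refuses to copy weights whose shapes don't match
    print("size mismatch" in str(err))
```

So the root cause is that `vs = len(self.tokenizer)` no longer matches the vocabulary the weights were trained with.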
I can make sense of the error, but I don't know how to fix it.
Any help is appreciated, thanks!