I used the code below to load a fastai model as a PyTorch model.

But I recently upgraded my fastai version, and now it throws an error I didn't see before.

```python
# these parameters aren't used, but this is the easiest way to get a model
bptt, em_sz, nh, nl = 70, 400, 1150, 3
drop_mult = 1.
drop_out = np.array([0.4, 0.5, 0.05, 0.3, 0.4]) * 0.5 * drop_mult
dps = drop_out
ps = [0.1]
ps = [dps[4]] + ps
num_classes = 3  # this is the number of classes we want to predict
lin_ftrs = [50]
layer = [em_sz * 3] + lin_ftrs + [num_classes]
vs = len(self.tokenizer)
self.model = get_rnn_classifier(bptt, 20 * 70, num_classes, vs, emb_sz=em_sz,
                                n_hid=nh, n_layers=nl, pad_token=1,
                                layers=layer, drops=ps, weight_p=dps[1],
                                embed_p=dps[2], hidden_p=dps[3])
self.model.load_state_dict(torch.load(os.path.join(dir_path, "model.pth"),
                                      map_location=lambda storage, loc: storage))
```

**Error is**

```
RuntimeError: Error(s) in loading state_dict for SequentialRNN:
size mismatch for 0.encoder.weight: copying a param of torch.Size([5999, 400]) from checkpoint, where the shape is torch.Size([3699, 400]) in current model.
size mismatch for 0.encoder_dp.emb.weight: copying a param of torch.Size([5999, 400]) from checkpoint, where the shape is torch.Size([3699, 400]) in current model.
```

I can make sense of the error (the checkpoint was saved with a vocabulary of 5999 tokens, but my current model is built with only 3699), but I don't know how to fix it.
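To make the mismatch concrete, here is a small sketch of how one might confirm it by inspecting the checkpoint's embedding shape directly, instead of building the model first and letting `load_state_dict` fail. The key name `0.encoder.weight` is taken from the traceback above; the in-memory `state` dict here just simulates what `torch.load("model.pth", map_location="cpu")` would return.

```python
import torch

# Simulated checkpoint state dict; in the real case this would be
# state = torch.load(os.path.join(dir_path, "model.pth"), map_location="cpu")
state = {"0.encoder.weight": torch.zeros(5999, 400)}

# The vocabulary size the checkpoint was trained with is the first
# dimension of the encoder embedding matrix.
ckpt_vocab = state["0.encoder.weight"].shape[0]
print(ckpt_vocab)  # 5999

# The current model uses vs = len(self.tokenizer) = 3699, which is why
# load_state_dict reports a size mismatch: the tokenizer/vocab used at
# training time differs from the one being used now.
```

If `ckpt_vocab` and `len(self.tokenizer)` disagree like this, the usual cause is that the vocabulary was rebuilt rather than reloaded from the one saved alongside the model at training time.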

Help is appreciated, Thanks