Lesson 11 Classifier learn.load_encoder RuntimeError

Hey everyone,
so I was working on Lesson 11, following the course and using the fixed code from the GitHub repo. The only things I changed are my dataset, which is much smaller than the IMDb one, and the dropout multiplier. But neither of those should influence the dimensions of the model.
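For reference, the relevant size settings from the course notebook (a sketch; names and values as in the repo code):

```python
# Model size settings as in the course notebook; the dropout multiplier
# I changed only scales dropout probabilities, not these dimensions.
em_sz, nh, nl = 400, 1150, 3  # embedding size, LSTM hidden size, number of layers
```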
But when I try to load the previously saved encoder, I get the following error:

```
RuntimeError: Error(s) in loading state_dict for MultiBatchRNN:
size mismatch for rnns.0.module.weight_ih_l0: copying a param of torch.Size([4600, 400]) from checkpoint, where the shape is torch.Size([4612, 400]) in current model.
size mismatch for rnns.0.module.bias_ih_l0: copying a param of torch.Size([4600]) from checkpoint, where the shape is torch.Size([4612]) in current model.
size mismatch for rnns.0.module.bias_hh_l0: copying a param of torch.Size([4600]) from checkpoint, where the shape is torch.Size([4612]) in current model.
size mismatch for rnns.0.module.weight_hh_l0_raw: copying a param of torch.Size([4600, 1150]) from checkpoint, where the shape is torch.Size([4612, 1153]) in current model.
size mismatch for rnns.1.module.weight_ih_l0: copying a param of torch.Size([4600, 1150]) from checkpoint, where the shape is torch.Size([4612, 1153]) in current model.
size mismatch for rnns.1.module.bias_ih_l0: copying a param of torch.Size([4600]) from checkpoint, where the shape is torch.Size([4612]) in current model.
size mismatch for rnns.1.module.bias_hh_l0: copying a param of torch.Size([4600]) from checkpoint, where the shape is torch.Size([4612]) in current model.
size mismatch for rnns.1.module.weight_hh_l0_raw: copying a param of torch.Size([4600, 1150]) from checkpoint, where the shape is torch.Size([4612, 1153]) in current model.
size mismatch for rnns.2.module.weight_ih_l0: copying a param of torch.Size([1600, 1150]) from checkpoint, where the shape is torch.Size([1600, 1153]) in current model.
```

What could be the cause of these small dimension changes?

Edit: So I apparently missed the line

```python
learner.model.load_state_dict(wgts)
```

which causes the same error to occur even earlier.
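A quick way to see every mismatch at once is to diff the shapes directly (a sketch; `wgts` and `learner` named as in the notebook):

```python
# Sketch of a debugging aid: list every parameter whose shape differs
# between the saved weights and the freshly built model.
model_sd = learner.model.state_dict()
for name, saved in wgts.items():
    if name in model_sd and model_sd[name].shape != saved.shape:
        print(name, tuple(saved.shape), '->', tuple(model_sd[name].shape))
```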

Solved it. If anyone gets this error: for me it was a typo in the model hidden size settings…
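For anyone decoding similar shape errors: PyTorch's LSTM stacks its four gate matrices, so `weight_ih_l0` has 4 * hidden_size rows. 4600 / 4 = 1150 while 4612 / 4 = 1153, i.e. the classifier model was built with a hidden size of 1153 while the saved encoder was trained with 1150. A standalone check in plain PyTorch:

```python
import torch.nn as nn

# An LSTM stacks the four gate matrices (input, forget, cell, output),
# so weight_ih_l0 has 4 * hidden_size rows and input_size columns.
lstm_ok   = nn.LSTM(input_size=400, hidden_size=1150)
lstm_typo = nn.LSTM(input_size=400, hidden_size=1153)
print(lstm_ok.weight_ih_l0.shape)    # torch.Size([4600, 400])
print(lstm_typo.weight_ih_l0.shape)  # torch.Size([4612, 400])
```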