Instead of creating an encoder via AWD_LSTM, I am now using:
encoder_mul = MultiBatchEncoder(bptt, max_len, arch(vocab_sz, **config_mul), pad_idx=pad_idx)
encoder_mul.load_state_dict(torch.load('fine_tuned_enc.pth'))
When I load the pre-trained weights into the MultiBatchEncoder,
I get the following error -
Error(s) in loading state_dict for MultiBatchEncoder:
Missing key(s) in state_dict: "module.encoder.weight", "module.encoder_dp.emb.weight", "module.rnns.0.weight_hh_l0_raw", "module.rnns.0.module.weight_ih_l0", "module.rnns.0.module.weight_hh_l0", "module.rnns.0.module.bias_ih_l0", "module.rnns.0.module.bias_hh_l0", "module.rnns.1.weight_hh_l0_raw", "module.rnns.1.module.weight_ih_l0", "module.rnns.1.module.weight_hh_l0", "module.rnns.1.module.bias_ih_l0", "module.rnns.1.module.bias_hh_l0", "module.rnns.2.weight_hh_l0_raw", "module.rnns.2.module.weight_ih_l0", "module.rnns.2.module.weight_hh_l0", "module.rnns.2.module.bias_ih_l0", "module.rnns.2.module.bias_hh_l0".
Unexpected key(s) in state_dict: "encoder.weight", "encoder_dp.emb.weight", "rnns.0.weight_hh_l0_raw", "rnns.0.module.weight_ih_l0", "rnns.0.module.weight_hh_l0", "rnns.0.module.bias_ih_l0", "rnns.0.module.bias_hh_l0", "rnns.1.weight_hh_l0_raw", "rnns.1.module.weight_ih_l0", "rnns.1.module.weight_hh_l0", "rnns.1.module.bias_ih_l0", "rnns.1.module.bias_hh_l0", "rnns.2.weight_hh_l0_raw", "rnns.2.module.weight_ih_l0", "rnns.2.module.weight_hh_l0", "rnns.2.module.bias_ih_l0", "rnns.2.module.bias_hh_l0".
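Comparing the two key lists, every missing key is an unexpected key with a "module." prefix added: MultiBatchEncoder wraps the AWD_LSTM under a `module` attribute, so the saved keys no longer line up. One possible workaround (a sketch, not a confirmed fix; the `remap_keys` helper below is hypothetical and shown with toy values in place of real weight tensors) is to remap the keys before calling `load_state_dict`:

```python
def remap_keys(state_dict, prefix='module.'):
    """Prefix every key so the flat AWD_LSTM keys match the
    wrapped MultiBatchEncoder's expected names."""
    return {prefix + k: v for k, v in state_dict.items()}

# Toy state_dict standing in for the real saved weights:
saved = {'encoder.weight': 0, 'rnns.0.module.weight_ih_l0': 1}
print(remap_keys(saved))
# {'module.encoder.weight': 0, 'module.rnns.0.module.weight_ih_l0': 1}
```

With the real file this would become something like `encoder_mul.load_state_dict(remap_keys(torch.load('fine_tuned_enc.pth')))`, assuming the prefix mismatch is the only difference between the two sets of keys.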