Language_model_learner not working as before?

If you are starting from scratch, you don't need to pass the pretrained_fnames parameter; the model loads the pre-trained WikiText weights automatically. It is only required when you are using your own custom pre-trained weights.
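
For example, a minimal sketch (assuming data_lm is an already-built language model DataBunch): with no pretrained_fnames, fastai downloads and uses the WikiText weights on its own.

from fastai.text import *

learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)  # pretrained=True by default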

Maybe some context was missing :wink:

These are not “my” pre-trained weights. This is a Spanish text generation project, so I start with encoder and itos files already made by someone else.

And the problem is that this has stopped working… with the very same files

Oh okay, got it. But the error means that the itos and encoder come from different data. I also faced the same issue; it was resolved when I matched them up. I think they might have been overwritten somehow. Maybe take them again from the original source and try again!

I just double-checked… but they are still the same original files from December 2018 that have been working until now… so the change must be in the way the fastai library loads them?

Hi, is your problem solved? I tried your code, and it works when I load my custom pre-trained model. Please check loading your own model (maybe for English) and try to replicate the error. You can also place the encoder and itos at a different path and try loading them from there.

It seems to work for me with the default English…

The one I am trying to use is the one found here (used by other people and by myself several times).
Would you be so kind as to try loading it? Thanks!

Okay I will try!

Your issue comes from the breaking change in fastai v1.0.53 that made the hidden size a multiple of 8 (1152 instead of 1150), while your pretrained weights have the old size (1150). Just pass along this config to deal with it:

from fastai.text import *  # provides awd_lstm_lm_config, language_model_learner, AWD_LSTM

# Restore the pre-1.0.53 hidden size so the old weights match the model
config = awd_lstm_lm_config.copy()
config['n_hid'] = 1150
learn = language_model_learner(data_lm, AWD_LSTM, config=config,
                               pretrained_fnames=[FILE_LM_ENCODER, FILE_ITOS],
                               drop_mult=0.3)

@sgugger ! Thanks A LOT!
You have saved my day… I am in the middle of a project and I was entering panic mode lol.
I was even creating a new GCP instance because I thought I had broken something in my current one…
I cannot thank you enough, and I can only click once on the heart :heart:

BTW, what would be the best way to stay up to date with these changes so my heart doesn't skip a beat next time?

Changes are posted here and there. In this case, I forgot to include the workaround for people with their own pretrained weights, sorry about that.

Hi @sgugger, I tried the same but it did not work. The same error as above is thrown when I run my learn.load() call, while the language_model_learner() line ran without issues.
Is there a way I can go back to a previous version and then run my code?
I am using the Gujarati and Hindi ULMFiT pre-trained language models.

Hey, were you able to resolve this? I am facing the same problem using a pretrained ULMFiT model for German. Configuring the hidden size to 1150 also did not help.

I installed a previous version of fastai: uninstall fastai and then install the version that was working for you.
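
For example, a rough sketch for a Colab/Jupyter cell (drop the leading "!" in a plain terminal; 1.0.52 is just one plausible pin since it predates the 1.0.53 change, use whatever version last worked for you):

!pip uninstall -y fastai
!pip install fastai==1.0.52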

That also worked for me on Colab, thank you!

@aditya8952: using the flag pretrained=False in language_model_learner() solved the issue for me.
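
For reference, a minimal sketch of that approach (the checkpoint name is a placeholder): with pretrained=False the learner is built without downloading any pretrained weights, and you then load your own checkpoint yourself.

learn = language_model_learner(data_lm, AWD_LSTM, config=config,
                               pretrained=False, drop_mult=0.3)
learn.load('my_pretrained_lm')  # hypothetical checkpoint previously saved with learn.save()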

Hey, I had the same problem but this fix worked for me.
Make sure to pass the parameter config=config in the learner definition; maybe you missed that.

Facing the same problem with Bengali here.
I couldn’t load my previous pretrained models a couple of days back, so I started from scratch. It was working fine till yesterday. Now I’m facing the same problem. The dataset, transforms, splits, and model are all identical; I'm just running the same code.

Were you able to fix it?

I am still getting the error:

RuntimeError: Error(s) in loading state_dict for SequentialRNN:
size mismatch for 0.encoder.weight: copying a param with shape torch.Size([60002, 400]) from checkpoint, the shape in current model is torch.Size([7248, 400]).
size mismatch for 0.encoder_dp.emb.weight: copying a param with shape torch.Size([60002, 400]) from checkpoint, the shape in current model is torch.Size([7248, 400]).
size mismatch for 1.decoder.weight: copying a param with shape torch.Size([60002, 400]) from checkpoint, the shape in current model is torch.Size([7248, 400]).
size mismatch for 1.decoder.bias: copying a param with shape torch.Size([60002]) from checkpoint, the shape in current model is torch.Size([7248]).
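
Note that this mismatch (60002 vs. 7248) is in the vocabulary dimension, not the hidden size, which suggests the checkpoint was saved with a different itos/vocab than the one in the current DataBunch. A quick hedged check (the path is a placeholder):

import torch

state = torch.load('models/bengali_lm.pth', map_location='cpu')  # hypothetical checkpoint path
sd = state['model'] if isinstance(state, dict) and 'model' in state else state

print(sd['0.encoder.weight'].shape[0])  # vocab size baked into the checkpoint
print(len(data_lm.vocab.itos))          # vocab size of the current DataBunch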

@obiwan
I have the same problem, have you fixed it yet?

Thank you so much, you saved my day!