Size mismatch when loading saved model

Hi, I’ve downloaded a dataset of about 13 million reviews from BoardGameGeek and tried collab learning on it. I’m having lots of fun with it and plan to write a blog post about it.

However, when I try to load a saved model in a new session, I get an error:

size mismatch for u_weight.weight: copying a param with shape torch.Size([277772, 50]) from checkpoint, the shape in current model is torch.Size([277751, 50]).
size mismatch for u_bias.weight: copying a param with shape torch.Size([277772, 1]) from checkpoint, the shape in current model is torch.Size([277751, 1]).

The error is pretty clear, but I don’t know why it’s happening, since the dataset is the same. The only steps I take are:

  • create DataBunch
  • create Learner
  • load model

See my code below:

np.random.seed=42  # NB: this assigns over np.random.seed instead of calling np.random.seed(42), so nothing is actually seeded
data = CollabDataBunch.from_df(reviews, user_name='user', item_name='name', rating_name='rating', bs=100000)
data.show_batch()
learner = collab_learner(data, n_factors=50, y_range=(2., 10.))
learner.load('3cycles1e-2-bs100000factors50yrange2-10wd005')
learner.model

Any help would be greatly appreciated!

Never mind, I found out I should have passed seed=42 when constructing the DataBunch.
Case closed :slight_smile:
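
For anyone who hits this later, here’s a minimal sketch of the corrected construction (same calls as in the snippet above; the seed argument to CollabDataBunch.from_df fixes the random train/validation split, and with it the user/item vocab sizes that the embedding tables are built from):

# Minimal sketch of the fix: pass seed to the DataBunch constructor so the
# split (and therefore the embedding table sizes) is reproducible across sessions.
data = CollabDataBunch.from_df(reviews,
                               user_name='user',
                               item_name='name',
                               rating_name='rating',
                               bs=100000,
                               seed=42)  # same seed as in the original training session
learner = collab_learner(data, n_factors=50, y_range=(2., 10.))
learner.load('3cycles1e-2-bs100000factors50yrange2-10wd005')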

Is there a way to load a saved model if you don’t know what the original seed was?

I don’t know.
Potentially you could write a for loop with a try/except clause and just try every seed between 0 and 100 (see the sketch below). That would only work if the model was created with an explicitly set seed, though. And obviously the creator didn’t choose 193822 as the random seed :slight_smile:
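
Something like this rough sketch, reusing the snippet from the first post (hypothetical; it assumes the original seed was an integer between 0 and 100, and that a wrong seed reliably surfaces as the size-mismatch RuntimeError on load):

# Brute-force seed search: rebuild the DataBunch with each candidate seed and
# see whether the checkpoint's embedding shapes line up.
for seed in range(101):
    data = CollabDataBunch.from_df(reviews, user_name='user', item_name='name',
                                   rating_name='rating', bs=100000, seed=seed)
    learner = collab_learner(data, n_factors=50, y_range=(2., 10.))
    try:
        learner.load('3cycles1e-2-bs100000factors50yrange2-10wd005')
        print(f'seed {seed} reproduces the embedding sizes')
        break
    except RuntimeError:
        pass  # sizes don't match for this split, try the next seed

Note that a successful load only proves the embedding sizes match, not that the split is identical, so treat any hit as best-effort.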

Hi @jvanelteren,

I tried adding a seed to my DataLoaders, but the same error still happens.

I’m using fastai 2.4 for text classification.

Here is my code:

dls_lm = TextDataLoaders.from_df(words_df,
                                 text_col='Question',
                                 valid_pct=0.2,
                                 is_lm=True,
                                 seq_len=22,
                                 bs=64,
                                 seed=20)
learn = language_model_learner(dls_lm,
                               AWD_LSTM,
                               drop_mult=0.4,
                               metrics=[accuracy, Perplexity()]).to_fp16()

# ... fitting and fine-tuning omitted ...
learn.save_encoder('finetuned_lng_encoder')


dls_blk = DataBlock(blocks=(TextBlock.from_df(text_cols='text', seq_len=22),
                            CategoryBlock),
                    get_x=ColReader(cols='text'),
                    get_y=ColReader(cols='label'),
                    splitter=TrainTestSplitter(test_size=0.2, random_state=21, stratify=df_small.label))

dls_clf = dls_blk.dataloaders(df_small,
                              bs=64,
                              seq_len=22,
                              seed=20)

learn_tc = text_classifier_learner(dls_clf,
                                   AWD_LSTM,
                                   drop_mult=0.4,
                                   metrics=accuracy_multi).to_fp16()

learn_tc = learn_tc.load_encoder('finetuned_lng_encoder')

Then I got this error: