Fastai size mismatch on EC2 but not local machine

I am trying to launch a web app that uses fastai on an AWS EC2 instance, but I am getting a size mismatch error between the checkpoint and the current model.

However, the web app works fine on my local machine using the same files; I have been testing it locally for days and it never runs into this issue.

Any advice? I suspected it might be something to do with the package versions, but so far I haven't gotten anywhere with that.

    import os
    import pickle

    from fastai.text import *

    # Load the training and validation DataFrames
    with open("static/data/train_df.pkl", "rb") as training:
        train_df = pickle.load(training)
    with open("static/data/valid_df.pkl", "rb") as validation:
        valid_df = pickle.load(validation)

    # Build the DataBunch and the language-model learner
    data_lm = TextLMDataBunch.from_df('data', train_df, valid_df,
                                      text_cols='title')
    learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)

    # Load the fine-tuned encoder and generate a prediction
    os.chdir('static')
    learn.load_encoder('ml_ft_enc')
    prediction = learn.predict("string", n_words=20,
                               temperature=0.8).split('xxbos')[0]
    os.chdir('../')


RuntimeError: Error(s) in loading state_dict for AWD_LSTM:
size mismatch for encoder.weight: copying a param with shape torch.Size([6399, 400]) from checkpoint, the shape in current model is torch.Size([6389, 400]).
size mismatch for encoder_dp.emb.weight: copying a param with shape torch.Size([6399, 400]) from checkpoint, the shape in current model is torch.Size([6389, 400]).
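The shapes in the error (6399 vs. 6389 rows in `encoder.weight`) are vocabulary sizes, which suggests `TextLMDataBunch.from_df` is rebuilding the vocab from the DataFrames on each machine and getting a slightly different token list (e.g. from differing tokenizer/spacy versions). One way to rule this out is to pin the vocabulary: save the token list from the machine where the encoder was trained and reuse it at deployment instead of rebuilding it. Below is a minimal sketch of that idea using plain `pickle`; `vocab_itos.pkl` and the stand-in token list are hypothetical, and in fastai v1 the real list would come from `data_lm.vocab.itos` and be passed back in via the `vocab` argument of `TextLMDataBunch.from_df`.

```python
import pickle

# On the training machine: persist the token list that the encoder
# was trained against (stand-in values here for illustration).
itos_at_training = ["xxunk", "xxpad", "xxbos", "the", "cat"]
with open("vocab_itos.pkl", "wb") as f:
    pickle.dump(itos_at_training, f)

# On the deployment machine: load the pinned token list instead of
# rebuilding the vocab from the DataFrames, then hand it to fastai,
# e.g. TextLMDataBunch.from_df(..., vocab=Vocab(itos_loaded)).
with open("vocab_itos.pkl", "rb") as f:
    itos_loaded = pickle.load(f)

# The number of tokens determines the embedding matrix's row count,
# so it must match the checkpoint exactly for load_encoder to work.
assert len(itos_loaded) == len(itos_at_training)
```

If the pinned vocab still produces a mismatch, the next thing to compare would be the fastai/spacy versions between the two machines.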