NLP: Exported model does worse than when saved

I created a text classification model with a fine-tuned language model. It does well on its validation set. I saved the model with learn.export().

Now the weird bit: I load the learner in a different notebook with learn = load_learner('') and use it to make predictions with learn.predict(text). The only issue: it does much worse! Accuracy on the validation set drops from ~60% to ~20%.
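To illustrate what I suspect might be going on (this is plain Python, not fastai's actual code — the vocabs and the numericalize helper are made up for the example): if the vocab used at inference time isn't the exact one the model was trained with, the same text gets mapped to different token ids, so the classifier effectively sees scrambled input.

```python
# Two vocabs with the same tokens but a different ordering.
train_vocab = ["xxunk", "xxpad", "the", "movie", "was", "great", "awful"]
infer_vocab = ["xxunk", "xxpad", "movie", "great", "the", "awful", "was"]

def numericalize(tokens, vocab):
    """Map tokens to integer ids, falling back to the unknown token (id 0)."""
    index = {tok: i for i, tok in enumerate(vocab)}
    return [index.get(tok, 0) for tok in tokens]

tokens = "the movie was great".split()
print(numericalize(tokens, train_vocab))  # ids the model was trained on: [2, 3, 4, 5]
print(numericalize(tokens, infer_vocab))  # different ids for identical text: [4, 2, 6, 3]
```

If that's what's happening, the weights are fine but the inputs are wrong, which would explain accuracy dropping to near chance rather than to zero.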

Do I also need to save the fine-tuned encoder or the vocab separately? I tried loading the encoder, but learn.load_encoder('fine_tuned_enc') raises an AssertionError.

I'm open to the possibility that this is something silly on my side, so mainly I want to find out whether this is the right process for deploying text models. Is learn.export() sufficient on its own?

Since load_learner() wasn't working, my workaround is to load the data, re-create the model, load the encoder, and then load the model checkpoint from the end of training (learn.load('stage-4_3')).
This feels a little clunky - shouldn't learn.export() save a copy of the trained model that can make the same predictions as the original?
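For comparison, here is a toy sketch in plain Python (not fastai's actual export code, just what I'd expect conceptually): a single serialized artifact that bundles the weights and the vocab, so the two can never drift apart between training and inference.

```python
import pickle

# Stand-ins for the real objects: a list for model parameters,
# a token list for the numericalizer's vocab.
artifact = {
    "weights": [0.1, -0.4, 0.7],
    "vocab": ["xxunk", "xxpad", "the"],
}

blob = pickle.dumps(artifact)   # what an export step would write to disk
restored = pickle.loads(blob)   # what a load step would read back

assert restored == artifact     # weights and vocab round-trip together
```

If export bundles everything like this, then predictions after loading should be bit-for-bit identical to before saving, which is the behaviour I was expecting.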