I created a text classification model with a fine-tuned language model. It performs well on its validation set, and I saved it with learn.export().
Now the weird bit: in a different notebook I load the learner with learn = load_learner('') and make predictions with learn.predict(text). The only issue: it does much worse. Accuracy on the validation set drops from ~60% to ~20%.
Do I also need to save the fine-tuned encoder or the vocab? I tried loading the encoder, but learn.load_encoder('fine_tuned_enc') raises an AssertionError.
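For context on why the vocab question matters, here is a minimal plain-Python sketch (no fastai, all names hypothetical) of the failure mode I suspect: if inference numericalizes text with a different vocab than the one used in training, the same tokens map to different ids, so the embedding layer effectively sees scrambled input.

```python
# Two hypothetical vocabs: the one the model was trained with, and a
# different one that might be rebuilt at inference time.
train_vocab = {"xxunk": 0, "the": 1, "movie": 2, "was": 3, "great": 4}
other_vocab = {"xxunk": 0, "great": 1, "was": 2, "the": 3, "movie": 4}

def numericalize(tokens, vocab):
    # Map each token to its id, falling back to the unknown token.
    return [vocab.get(t, vocab["xxunk"]) for t in tokens]

tokens = ["the", "movie", "was", "great"]
print(numericalize(tokens, train_vocab))  # [1, 2, 3, 4]
print(numericalize(tokens, other_vocab))  # [3, 4, 2, 1] -- same words, wrong ids
```

If load_learner() isn't restoring the original vocab along with the weights, an accuracy collapse like mine would be the expected symptom.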
I'm open to the possibility that this is something silly on my side, so mainly I want to know whether this is the right process for deploying text models. Is learn.export() sufficient on its own?