Exporting a text classifier for some reason lowers its accuracy

So I have noticed that the act of exporting a trained text classifier lowers its accuracy against a test set, and I can't for the life of me figure out why this is happening. It corrupts both the classifier held by the variable and the exported pickle file. Here's some example code:

import copy

import numpy as np
from fastai.text import *  # fastai v1: provides load_learner, DatasetType, etc.

# Baseline: number of incorrect test-set predictions before exporting
print(len(test_incorrects(classifier_v1, test_df_v1)))

# A deep copy made before exporting, for comparison
copied_classifier = copy.deepcopy(classifier_v1)
print(len(test_incorrects(copied_classifier, test_df_v1)))

classifier_v1.export('charge_statute_model_spp_64_40drop_v1.pkl')
# Exporting immediately "corrupts" the classifier bound to this variable and lowers its accuracy
print(len(test_incorrects(classifier_v1, test_df_v1)))

# The copied classifier is not affected and still performs the same
print(len(test_incorrects(copied_classifier, test_df_v1)))

# Loading the previously exported pickle file gives similar, slightly worse "corrupted" accuracy
loaded_classifier = load_learner('.', 'charge_statute_model_spp_64_40drop_v1.pkl')
print(len(test_incorrects(loaded_classifier, test_df_v1)))

The results from the above code (counts of incorrect test-set predictions) are:

260
260
386
260
398
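
One stopgap that seems to follow from the numbers above (a sketch, not a fix; it assumes copy.deepcopy captures the full Learner state, which the matching 260s suggest): export a throwaway deep copy so the in-memory classifier is never touched. The pickle on disk presumably still loads with degraded accuracy, though.

export_copy = copy.deepcopy(classifier_v1)
export_copy.export('charge_statute_model_spp_64_40drop_v1.pkl')
del export_copy

# classifier_v1 should still score 260 incorrect, since only the copy was exported
print(len(test_incorrects(classifier_v1, test_df_v1)))

For reference, here is the test_incorrects helper that the counts above come from: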

def test_incorrects(classifier_arg, tdf, input_arg='input'):
    test_learner = classifier_arg
    # Attach the (upper-cased) inputs as the learner's test set
    test_learner.data.add_test([x.upper() for x in tdf[input_arg]])
    # Predict on the test set and take the argmax class per row
    preds, y = test_learner.get_preds(ds_type=DatasetType.Test)
    labels = np.argmax(preds, 1)
    # Record the predicted class and its confidence on the dataframe (mutates tdf)
    tdf['ulmfit_model_pred'] = [test_learner.data.classes[idx] for idx in labels]
    tdf['ulmfit_model_confidence'] = [float(preds[i][idx]) for i, idx in enumerate(labels)]
    # Return the rows the model got wrong
    return tdf[tdf['label'] != tdf['ulmfit_model_pred']]
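
In case it helps with debugging, here is a diagnostic sketch I could run (assuming standard PyTorch state_dict semantics, and that the fastai Learner exposes its underlying network as .model) to see whether export mutates the weights themselves or something else, such as leaving the model in training mode:

import torch

# Snapshot the weights, export, then diff the state dicts
before = {k: v.clone() for k, v in classifier_v1.model.state_dict().items()}
classifier_v1.export('tmp_export.pkl')
after = classifier_v1.model.state_dict()

# Any keys whose tensors changed across the export call
changed = [k for k in before if not torch.equal(before[k], after[k])]
print(changed)                       # empty => export changes something other than the weights
print(classifier_v1.model.training)  # True => export left the model in train mode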

This is essentially preventing me from saving any of my work without ruining the model's quality. If anyone can offer some help, it would be greatly appreciated.
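
As a stopgap for saving work, I may fall back on persisting just the weights with learner.save / learner.load (fastai v1 writes a .pth file under the learner's models directory; whether this sidesteps whatever export changes is only my assumption):

# Save weights only, instead of exporting the whole Learner
classifier_v1.save('charge_statute_model_spp_64_40drop_v1')

# Later: rebuild the learner exactly as for training (same DataBunch/vocab),
# then load the weights back in
classifier_v1.load('charge_statute_model_spp_64_40drop_v1')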

I am using fastai v1 and running this on SageMaker.

Bumping this one.