A model trained on multiple GPUs raises "SyncBatchNorm expected ..." during inference on CPU

I have a model that was trained in a multi-GPU environment. The code is roughly:

```python
import timm
from fastai.vision.all import *
from fastai.distributed import *

timm_model = timm.create_model(model_name='resnet34', num_classes=10)
learn = Learner(dls, timm_model, metrics=[accuracy, top_k_accuracy]).to_fp16()
with learn.distrib_ctx():
    learn.fine_tune(10)
```
But when I run inference on CPU, it raises "SyncBatchNorm expected ... tensor on GPU". The inference code is just `load_learner` followed by `predict`.
I have read many articles and issues, but found no clear, effective explanation. What should I do before `learn.export()`, or when loading, so the model runs on CPU?
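For context, one thing I wondered about is recursively replacing the `SyncBatchNorm` layers with plain `BatchNorm2d` (copying their weights and running statistics) before exporting. A minimal sketch in plain PyTorch of what I mean — the helper name and the toy model here are my own, not fastai API:

```python
import torch
from torch import nn

def revert_sync_batchnorm(module: nn.Module) -> nn.Module:
    # Recursively replace every SyncBatchNorm with an equivalent BatchNorm2d,
    # copying the affine parameters and running statistics so the converted
    # model computes the same thing on CPU.
    mod = module
    if isinstance(module, nn.SyncBatchNorm):
        mod = nn.BatchNorm2d(module.num_features, module.eps, module.momentum,
                             module.affine, module.track_running_stats)
        if module.affine:
            mod.weight = module.weight
            mod.bias = module.bias
        if module.track_running_stats:
            mod.running_mean = module.running_mean
            mod.running_var = module.running_var
            mod.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        mod.add_module(name, revert_sync_batchnorm(child))
    return mod

# Toy model standing in for the real network.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.SyncBatchNorm(8), nn.ReLU())
model = revert_sync_batchnorm(model)
print(any(isinstance(m, nn.SyncBatchNorm) for m in model.modules()))  # → False
```

Is something like this the right thing to call on `learn.model` before export, or is there a fastai-native way (e.g. an option on `distrib_ctx`) to avoid the SyncBatchNorm layers ending up in the exported learner in the first place?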

Thanks a lot.