load_learner stops loading a model I previously exported

I trained my model in Google Colab with a GPU and exported it to my Google Drive:

learn.export('/content/gdrive/My Drive/md')

Later, it can be loaded back in Colab:

learn2 = load_learner('/content/gdrive/My Drive/', 'md')

But when I tried to do the same on my PC (CPU only) after downloading the model file, it failed:

learn2 = load_learner('./analyics', 'md')

  File "F:\mydoc\git\test.py", line 165, in <module>
    load_learner('F:/mydoc/git/analyics','md'),
  File "E:\program\anaconda3\lib\site-packages\fastai\basic_train.py", line 621, in load_learner
    state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
  File "E:\program\anaconda3\lib\site-packages\torch\serialization.py", line 586, in load
    with _open_zipfile_reader(f) as opened_zipfile:
  File "E:\program\anaconda3\lib\site-packages\torch\serialization.py", line 246, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
AttributeError: 'WindowsPath' object has no attribute 'tell'

If I load another model I exported several months ago (in March), it succeeds:

learn3 = load_learner('./analyics', 'old_md')

E:\program\anaconda3\lib\site-packages\torch\serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.loss.MSELoss' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
E:\program\anaconda3\lib\site-packages\torch\serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
E:\program\anaconda3\lib\site-packages\torch\serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
E:\program\anaconda3\lib\site-packages\torch\serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)

The difference in behavior I can see is that the new PyTorch version uses a zip-based format while the old one doesn't:

with _open_file_like(f, 'rb') as opened_file:
    if _is_zipfile(opened_file):
        with _open_zipfile_reader(f) as opened_zipfile:
            if _is_torchscript_zip(opened_zipfile):
                warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
                              " dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to"
                              " silence this warning)", UserWarning)
                return torch.jit.load(f)
            return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)

How can I fix this? Or can we export in the legacy format?
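For context: PyTorch 1.6 switched `torch.save` to a zip-based container, and checkpoints written that way can't be read by older PyTorch versions. You can tell which format a checkpoint file uses with the standard library alone. The sketch below is a minimal, torch-free illustration; the two fake files just stand in for real checkpoints:

```python
import pickle
import tempfile
import zipfile
from pathlib import Path

def is_new_format(path):
    """True if the checkpoint uses the PyTorch >= 1.6 zip container.

    Legacy checkpoints are a bare pickle stream, which is not a valid
    zip archive, so zipfile.is_zipfile distinguishes the two.
    """
    return zipfile.is_zipfile(str(path))

tmp = Path(tempfile.mkdtemp())

# A "new-style" checkpoint is really a zip archive with a pickle inside.
with zipfile.ZipFile(tmp / "new_md", "w") as z:
    z.writestr("data.pkl", pickle.dumps({"weights": [1, 2, 3]}))

# A "legacy" checkpoint is a raw pickle stream.
(tmp / "old_md").write_bytes(pickle.dumps({"weights": [1, 2, 3]}))

print(is_new_format(tmp / "new_md"))  # True  -> needs PyTorch >= 1.6 to load
print(is_new_format(tmp / "old_md"))  # False -> legacy format
```

If upgrading PyTorch on the loading machine is not an option, PyTorch 1.6+ can still write the old format via `torch.save(obj, f, _use_new_zipfile_serialization=False)`; as far as I know `learn.export` does not expose that flag, so you would have to save the state yourself.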

OK, after upgrading my PyTorch to 1.6.0, the error is gone.

When using load_learner, I got the following error. Any ideas on how to resolve it?

in
----> 1 learn_ett = load_learner(Path('/baseline-b4'))

/opt/conda/lib/python3.7/site-packages/fastai/learner.py in load_learner(fname, cpu)
551 "Load a Learner object in fname, optionally putting it on the cpu"
552 distrib_barrier()
--> 553 res = torch.load(fname, map_location='cpu' if cpu else None)
554 if hasattr(res, 'to_fp32'): res = res.to_fp32()
555 if cpu: res.dls.cpu()

/opt/conda/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
592 opened_file.seek(orig_position)
593 return torch.jit.load(opened_file)
--> 594 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
595 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
596

/opt/conda/lib/python3.7/site-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
851 unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
852 unpickler.persistent_load = persistent_load
--> 853 result = unpickler.load()
854
855 torch._utils._validate_loaded_sparse_tensors()

AttributeError: Can't get attribute 'NonNativeMixedPrecision' on <module 'fastai.callback.fp16' from '/opt/conda/lib/python3.7/site-packages/fastai/callback/fp16.py'>

Not sure if this is causing your issue, but if the fastai version used to export is not the same as the fastai version used to import, you can get this error.

The error means the `NonNativeMixedPrecision` class existed in the `fastai.callback.fp16` module when the learner was exported, but it is not present in your current environment.
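To make the failure mode concrete, here is a torch-free sketch (the module and class names are stand-ins, not the real fastai internals): unpickling looks a class up by module path and attribute name, so if the attribute disappeared between the exporting and importing library versions, `pickle` raises exactly this kind of `AttributeError`. Re-attaching a compatible class under the old name before loading is a common stopgap; matching the fastai versions, or re-exporting with the new version, is the cleaner fix.

```python
import pickle
import sys
import types

# A throwaway module plays the role of fastai.callback.fp16.
mod = types.ModuleType("fake_fp16")
sys.modules["fake_fp16"] = mod

class NonNativeMixedPrecision:  # stand-in for the removed callback class
    pass

# Make pickle record the class as fake_fp16.NonNativeMixedPrecision.
NonNativeMixedPrecision.__module__ = "fake_fp16"
mod.NonNativeMixedPrecision = NonNativeMixedPrecision

blob = pickle.dumps(NonNativeMixedPrecision())  # "export" an instance

# Simulate upgrading to a library version that dropped the class:
del mod.NonNativeMixedPrecision
try:
    pickle.loads(blob)
    failed = False
except AttributeError as e:
    failed = True
    print(e)  # Can't get attribute 'NonNativeMixedPrecision' on <module ...>

# Stopgap: re-attach a class under the old name, then load again.
mod.NonNativeMixedPrecision = NonNativeMixedPrecision
restored = pickle.loads(blob)
print(failed, type(restored).__name__)
```

With the real library, the analogous stopgap would be assigning a substitute class to `fastai.callback.fp16.NonNativeMixedPrecision` before calling `load_learner`; whether any class in the newer fastai is actually compatible depends on the two versions involved, so treat this as a last resort.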