Weight conversion from GPU to CPU

Hello, I have trained a test model on Google Colab. I would like to load the trained weights so I can run the model locally on my Mac. When I load the weights, the labels all seem to be mixed up, so it looks like the model is predicting incorrectly.

I have tried to convert the weights from the Colab GPU to CPU, but I was wondering whether this is even possible now that I'm using fastai v1. I did find an old pre-v1 thread that talked about problems converting weights in v1.
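From what I've read, GPU-trained weights can usually be remapped to CPU at load time with PyTorch's map_location argument, something like this (assuming the weights were saved with learn.save('stage-1'); the filename is just an example):

import torch

# Load a checkpoint saved on a CUDA device onto a CPU-only machine.
# map_location remaps every tensor's storage from GPU to CPU.
state = torch.load('stage-1.pth', map_location=torch.device('cpu'))

Is this the right approach with fastai v1, or is there a built-in way to do it?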


I'm getting the same kind of error when I try to use the 'export.pkl' downloaded from Colab on my Mac:
NotADirectoryError                        Traceback (most recent call last)
<ipython-input> in <module>
----> 1 learn=load_learner('export.pkl')

~/miniconda3/envs/fastai/lib/python3.7/site-packages/fastai/basic_train.py in load_learner(path, file, test, tfm_y, **db_kwargs)
    616     "Load a `Learner` object saved with `export_state` in `path/file` with empty data, optionally add `test` and load on `cpu`. `file` can be file-like (file or buffer)"
    617     source = Path(path)/file if is_pathlike(file) else file
--> 618     state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
    619     model = state.pop('model')
    620     src = LabelLists.load_state(path, state.pop('data'))

~/miniconda3/envs/fastai/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    523         pickle_load_args['encoding'] = 'utf-8'
    524 
--> 525     with _open_file_like(f, 'rb') as opened_file:
    526         if _is_zipfile(opened_file):
    527             with _open_zipfile_reader(f) as opened_zipfile:

~/miniconda3/envs/fastai/lib/python3.7/site-packages/torch/serialization.py in _open_file_like(name_or_buffer, mode)
    210 def _open_file_like(name_or_buffer, mode):
    211     if _is_path(name_or_buffer):
--> 212         return _open_file(name_or_buffer, mode)
    213     else:
    214         if 'w' in mode:

~/miniconda3/envs/fastai/lib/python3.7/site-packages/torch/serialization.py in __init__(self, name, mode)
    191 class _open_file(_opener):
    192     def __init__(self, name, mode):
--> 193         super(_open_file, self).__init__(open(name, mode))
    194 
    195     def __exit__(self, *args):

NotADirectoryError: [Errno 20] Not a directory: 'export.pkl/export.pkl'
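
Looking at the last line, it seems fastai v1's load_learner(path, file) treats the first argument as a directory and joins it with the filename (source = Path(path)/file in the traceback above), so load_learner('export.pkl') ends up trying to open 'export.pkl/export.pkl'. Passing the directory and filename separately looks like the intended call; a minimal sketch, assuming export.pkl sits in the current working directory:

from fastai.basic_train import load_learner

# fastai v1: first argument is the folder, second the filename
learn = load_learner('.', 'export.pkl')

(In fastai v2 the signature changed to load_learner(fname), which takes the full path to the .pkl file.)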

Also, when I restart my Colab notebook and upload the 'export.pkl' file, I get an error something like this:
RuntimeError Traceback (most recent call last)

<ipython-input-15-222953ade575> in <module>()
----> 1 l2=load_learner(fname='export.pkl')

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in __init__(self, name_or_buffer)
    239 class _open_zipfile_reader(_opener):
    240     def __init__(self, name_or_buffer):
--> 241         super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
    242
    243

RuntimeError: [enforce fail at inline_container.cc:144] . PytorchStreamReader failed reading zip archive: failed finding central directory
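
From what I can tell, 'failed finding central directory' means PyTorch's zip-based reader is seeing a truncated or corrupted file, which can happen when an upload or download is incomplete. One way to check is to compare a checksum of the file on both machines; a minimal sketch using only the standard library:

import hashlib
from pathlib import Path

# Run this on both machines; differing sizes or digests mean the
# file was truncated or corrupted in transfer.
data = Path('export.pkl').read_bytes()
print(len(data), hashlib.md5(data).hexdigest())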

It would be really helpful if someone could explain the problem and suggest a workaround. I cannot upload my data to Colab because it consumes too much of my internet data, which is limited for me, so I have to run inference on my local machine.

Hi @avenio, I was struggling with this error ('PytorchStreamReader failed reading zip archive: failed finding central directory') myself today when trying to load export.pkl from a Colab notebook.

Here is what works for me:

  • Check that export.pkl has been uploaded and is available in the directory where you call load_learner(path/'export.pkl'). Doing this should resolve the above error (at least it does for me).

Note that since we manually upload export.pkl into the runtime environment, it is wiped every time the notebook disconnects. That means we need to re-upload it and make sure it's there before running the above, as in the sketch below.
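
A quick way to guard against a missing file after a runtime reset is a small existence check before calling load_learner; a minimal sketch (files.upload() is Colab's standard upload helper):

from pathlib import Path
from google.colab import files

# After a runtime reset the uploaded file is gone, so re-upload it first
if not Path('export.pkl').exists():
    files.upload()  # choose export.pkl in the file picker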

I found it much more productive to upload the model to my own folder in Google Drive and load it from there. Here’s how I do it:

# connect to Google Drive (this prompts for authorization on first run)
from google.colab import drive
drive.mount('/content/drive')

# I uploaded my export.pkl into the top level of My Drive

# from then on I can load it by running
learn_inf = load_learner('/content/drive/MyDrive/export.pkl')
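
Once it's loaded, inference runs as usual. A minimal sketch, assuming a recent fastai version where load_learner takes the full path, an image classifier, and a hypothetical local image called test_image.jpg:

# predict() returns the decoded label, its index, and all class probabilities
pred_class, pred_idx, probs = learn_inf.predict('test_image.jpg')
print(pred_class, probs[pred_idx])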

Hope that helps!
