Load_learner on CPU throws "RuntimeError: Attempting to deserialize object on a CUDA device"

Thanks :slight_smile:. Will try this.

Yes, it's merged. I just changed the device argument to a cpu flag (cpu=True if you want to load on the CPU) because any device other than 'cpu' wasn't working.
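So with a dev install, loading on the CPU looks like this (the directory path is just a placeholder for wherever your export.pkl lives):

from fastai.basic_train import load_learner

learn = load_learner('path/to/export_dir', cpu=True)  # cpu=True deserializes the model onto the CPU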

May I ask when the next release with this bug fix will be?

I’m having the same problem. I have upgraded my fastai package from the updated repo just now and confirmed the new PR is there.

Either

torch.load(open('data/export.pkl', 'rb'), map_location='cpu')

or

learn = load_learner('./data', cpu=True)

produces the same error message. Any suggestions?

Thanks

I have the same issue as @sunhwan. Any suggestions on how to fix this would be awesome.

To avoid the issue entirely I went for the easiest way: the model is now saved on the CPU, and load_learner puts it on defaults.device when loading (so the CPU on a CPU-only machine, the GPU if there is one). That means there is no longer a device or cpu argument.
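So on a build with this change, the whole round trip should just be (paths are placeholders):

from fastai.basic_train import load_learner

# training machine: learn.export() now writes a CPU-serialized export.pkl
# deployment machine: the model lands on defaults.device automatically
learn = load_learner('path/to/export_dir')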


It works now. Thanks @sgugger


I am running into the same issue and still can't figure it out. I'd really appreciate any help…

I trained an image classifier, used learn.export() to create the 'export.pkl' file, then downloaded it to my Mac and tried to load it:

defaults.device = torch.device('cpu')
learn = load_learner('/Users/Bliu/Desktop/Sample')

Version:
Successfully installed fastai-1.0.43.dev0 nvidia-ml-py3-7.352.0

Note that you need a recent version on each side; you won't be able to use load_learner with an old export.
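A quick way to confirm what's installed on both sides:

import fastai
print(fastai.__version__)  # should be recent (and ideally matching) on both machines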

In case we have both a CPU and a GPU available on a machine but the GPU is being used by another user/program, how can we force fastai v1 to use the CPU and not the GPU?

Just change defaults.device to cpu. To be completely sure, add a torch.cuda.set_device('cpu').
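Putting that together, a minimal sketch (defaults lives in fastai.torch_core):

import torch
from fastai.torch_core import defaults

defaults.device = torch.device('cpu')  # fastai will now put models and batches on the CPU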


It works now, thank you!

Hi Nisar, when you use learner.export() in Colab, where does Colab store the pickle file? I cannot seem to find the folder fastai is using to store all the data. Thanks
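(For what it's worth, in fastai v1 export() writes the file into the Learner's own path attribute, so printing it shows where the pickle goes:)

print(learn.path)  # export() writes into this directory
learn.export()     # creates learn.path/'export.pkl'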

Hi, I tried the same thing but without success. I'm still getting the message "Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False."

I have one machine with a GPU for training, and I want to export the learner in order to create a Docker container with a small web application. Basically my project is based on this: https://github.com/simonw/cougar-or-not
Whenever I build the Docker image I get the following error message:

Traceback (most recent call last):
  File "index.py", line 26, in <module>
    learner = load_learner('.', 'export.pkl')
  ...
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

I'm using fastai version 1.0.42 in both environments.
What I’ve tried so far:

  • Several commands in the Jupyter notebook on my GPU machine to set the device to the CPU before exporting my learner:
    torch.cuda.set_device = torch.device('cpu')
    torch_core.defaults.device = 'cpu'
    defaults.device = torch.device('cpu')
  • Adjusting the load_learner command (l = load_learner(path=Path('./'), cpu=True)). This gives me the error message "TypeError: load_learner() got an unexpected keyword argument 'cpu'". (I'm not sure if this improvement is already merged into the 1.0.42 release.)
  • Using the CPU environment on my GPU machine. Unfortunately the file environment-cpu.yml from the fastai repo is only for fastai version 0.7.x, and I'm not sure how to adjust setup.py for my CPU environment.

So I'm a bit stuck. Does anyone have a tip or hint that points me in the right direction?
Thanks!

The fix with cpu=True is in master only for now and will be in 1.0.43 when we release it. It's also possible that the parts that were serialized on the GPU by mistake are fixed in master too, so I'd definitely suggest you try a developer install.

Hi, okay, thank you. I tried the developer install within the Docker container and can confirm that it is now working as expected.


@sgugger Hi, I'm attempting to train on the GPU then deploy on the CPU (even though the deployment test device has a GPU). Using v1.0.46.

On the training side, after training and saving my model, I'm doing:

defaults.device = 'cpu'
learn.load('best')
learn.path = model_dir
learn.model.eval()
learn.export()

Then on the serving side:

defaults.device = 'cpu'
learn = load_learner(model_dir, fname='export.pkl')

But I still get the error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

Would you please point out where I’m going wrong?

defaults.device should be a torch.device, so torch.device('cpu').
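Applied to the serving-side snippet above, that would look like this (model_dir as defined in the post above):

import torch
from fastai.basic_train import load_learner
from fastai.torch_core import defaults

defaults.device = torch.device('cpu')  # must be a torch.device, not the string 'cpu'
learn = load_learner(model_dir, fname='export.pkl')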


The 1.0.43 release solved this load_learner issue for me.

But I am now attempting to use a GPU-trained language model on my local CPU to train a text classifier, and I am getting the same RuntimeError when I try to load the encoder:

learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder(name='xxx')

Is there a workaround for this?

Good point, I just added the option in master to pass a device (like load has).
If you don’t have a dev install, a workaround is to load your encoder file on the CPU then save it again:

encoder = torch.load(path/'models'/'xxx.pth', map_location='cpu')  # save_encoder stores the file with a .pth extension
torch.save(encoder, path/'models'/'xxx.pth')
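With a dev install you should instead be able to pass the device directly; assuming the new argument works like load's device argument, something like:

learn.load_encoder('xxx', device='cpu')  # device option added in master, per the post above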