load_learner on CPU throws "RuntimeError: Attempting to deserialize object on a CUDA device"

(Nisar Ahamed) #1


I am currently training a model in Colab. After training, I use learner.export() to create the pickle file, but I want to use this learner for inference on the CPU of my local machine.

When I use load_learner() to load from the pickle file, I get the exception below:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

Is there anything I should do while exporting or loading so that the learner loads on the CPU?
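For context, the error is raised while unpickling: the exported file records that each tensor storage lived on a CUDA device, and map_location='cpu' tells the loader to remap those storages during deserialization. Below is a minimal pure-Python analogue of that mechanism using pickle's persistent-id hooks (FakeStorage, DevicePickler, and DeviceUnpickler are made-up names for illustration, not torch or fastai APIs; real torch storages are more involved):

```python
import io
import pickle

class FakeStorage:
    """Stand-in for a tensor storage that remembers its device."""
    def __init__(self, data, device):
        self.data = data
        self.device = device

class DevicePickler(pickle.Pickler):
    # Serialize storages "by reference", recording the device they
    # lived on -- this is why a GPU export remembers 'cuda:0'.
    def persistent_id(self, obj):
        if isinstance(obj, FakeStorage):
            return ('storage', obj.data, obj.device)
        return None

class DeviceUnpickler(pickle.Unpickler):
    # map_location lets the loader override the recorded device.
    def __init__(self, f, map_location=None):
        super().__init__(f)
        self.map_location = map_location

    def persistent_load(self, pid):
        tag, data, device = pid
        if self.map_location is not None:
            device = self.map_location  # e.g. remap 'cuda:0' -> 'cpu'
        return FakeStorage(data, device)

buf = io.BytesIO()
DevicePickler(buf).dump(FakeStorage([1.0, 2.0], 'cuda:0'))
buf.seek(0)
restored = DeviceUnpickler(buf, map_location='cpu').load()
print(restored.device)  # cpu
```

Without the map_location override, the restored object would still claim to live on 'cuda:0', which is essentially what the fastai error above is complaining about.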


(Nate Gadzhibalaev) #2

I think @PierreO just fixed it earlier today in this PR. Try using his fork for now, or try fastai from master in a day; @sgugger will probably merge it :wink:

(Nisar Ahamed) #3

Thanks :slight_smile: . Will try this


Yes, it's merged. I just changed device to a cpu flag (=True if you want to load on the CPU) because devices other than 'cpu' weren't working.

(Nisar Ahamed) #5

May I ask when the next release with this bug fix will be?

(Sunhwan Jo) #6

I’m having the same problem. I have upgraded my fastai package from the updated repo just now and confirmed the new PR is there.


torch.load(open('data/export.pkl', 'rb'), map_location='cpu')


learn = load_learner('./data', cpu=True)

both produce the same error message. Any suggestions?



I have the same issue as @sunhwan. Any suggestions on how to fix this would be appreciated.


To avoid the issue entirely, I went for the easiest way: the model is now saved on the CPU, and load_learner will put it on defaults.device when loading (so the CPU on a CPU-only machine, the GPU if there is one). That means there is no longer a device or cpu argument.

(Sunhwan Jo) #9

It works now. Thanks @sgugger

(Bin Liu) #10

I am running into the same issue and still can't figure it out. I would really appreciate any help.

I trained an image classifier, used learn.export() to create the 'export.pkl' file, then downloaded it to my Mac and tried to load it:

defaults.device = torch.device('cpu')
learn = load_learner('/Users/Bliu/Desktop/Sample')

Successfully installed fastai-1.0.43.dev0 nvidia-ml-py3-7.352.0


Note that you need a recent version on each side; you won't be able to use load_learner with an old export.


So in case we have both a CPU and a GPU available on a machine, but the GPU is being used by another user/program, how can we force fastai v1 to use the CPU and not the GPU?


Just change defaults.device to cpu. To be completely sure, add a torch.cuda.set_device('cpu').
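As a minimal sketch of forcing work onto the CPU, assuming only that PyTorch is installed (defaults.device is fastai's global setting, so only the plain-torch part is shown here):

```python
import torch

# Create tensors explicitly on the CPU, even if a GPU is visible.
device = torch.device('cpu')
x = torch.zeros(2, 3, device=device)
print(x.device)  # cpu

# In fastai v1, per this thread, you would additionally set:
#   defaults.device = torch.device('cpu')
# so that fastai places models and batches on the CPU as well.
```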


It works now, thank you!

(Gavin Armstrong) #15

Hi Nisar, when you use learner.export() in Colab, where does Colab store the pickle file? I cannot seem to find the folder fast.ai uses to store all the data. Thanks!


Hi, I tried the same thing but without success. I am still getting the message "Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False."

I have one machine with a gpu for training and I want to export the learner in order to create a docker container with a small web application. Basically my project is based on this - https://github.com/simonw/cougar-or-not
Whenever I build the docker image I’m getting the following error message:

Traceback (most recent call last):
  File "index.py", line 26, in <module>
    learner = load_learner('.', 'export.pkl')
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

I’m using fast.ai version 1.0.42 on both environments.
What I’ve tried so far:

  • several commands in the Jupyter notebook on my GPU machine to set the device to the CPU before exporting my learner:
    torch.cuda.set_device = torch.device('cpu')
    torch_core.defaults.device = 'cpu'
    defaults.device = torch.device('cpu')
  • adjusting the load_learner command (l = load_learner(path=Path('./'), cpu=True)). This gives me the error message "TypeError: load_learner() got an unexpected keyword argument 'cpu'" (I'm not sure if this improvement is already merged into the 1.0.42 release.)
  • trying to use the CPU environment on my GPU machine. Unfortunately the file environment-cpu.yml from the fastai repo is only for fastai version 0.7.x, and I'm not sure how to adjust setup.py for my CPU environment.

So I’m a bit stuck. Does anyone have a tip or a hint for me which points me into the correct direction?


The fix with cpu=True is in master only for now and will be in 1.0.43 when we release it. It's also possible that the parts that were serialized on the GPU by mistake are also fixed in master, so I'd definitely suggest trying a developer install.
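For reference, a developer (editable) install from master generally looks something like this (a sketch only; the fastai repo's own docs are the authority on the exact steps):

```shell
git clone https://github.com/fastai/fastai
cd fastai
pip install -e ".[dev]"
```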


Hi, okay, thank you. I tried the developer install within the Docker container and can confirm that it now works as expected.