I am currently training a model in colab. After training, I use learner.export() to create the pickle file. But I want to use this learner for inference on CPU on my local machine.
When I use load_learner() to load from the pickle file, I get the exception below:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
Is there anything to do while exporting/loading to load the learner on CPU?
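As background, here is a minimal sketch of the workaround the error message itself suggests, using plain PyTorch (a stand-in tensor rather than an actual fastai export):

```python
import torch

# Stand-in for a checkpoint that was saved on a GPU machine.
tensor = torch.randn(3)
torch.save(tensor, "export_demo.pt")

# map_location='cpu' remaps any CUDA storages to the CPU at load time,
# so deserialization works even when torch.cuda.is_available() is False.
loaded = torch.load("export_demo.pt", map_location="cpu")
print(loaded.device)  # cpu
```

The catch is that load_learner calls torch.load internally, so you cannot pass map_location to it directly; that is what the fix discussed below addresses.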
I think @PierreO just fixed it earlier today in this PR. Try using his fork for now, or try fastai from master in a day; @sgugger will probably merge it.
To avoid the issue entirely I went for the easiest way: the model is now saved on the CPU and load_learner will put it on defaults.device when loading (so the CPU in a CPU-only machine, the GPU if there is one). That means there is no more device or cpu argument.
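In plain PyTorch terms, the behavior described above amounts to something like this hedged sketch (nn.Linear stands in for the exported model; fastai uses defaults.device rather than computing the device inline):

```python
import torch
import torch.nn as nn

# The exported model is stored on the CPU; at load time it is moved
# to the default device: GPU if one is available, otherwise CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2)   # stand-in for the exported model
model.to(device)          # fastai does the equivalent with defaults.device

print(next(model.parameters()).device)
```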
So in case we have both a CPU and a GPU available on a machine but the GPU is being taken by another user/program, how can we force fastai v1 to use the CPU and not the GPU?
Hi Nisar, when you use learner.export() in colab, where does colab store the pickle file? I cannot seem to find the folder fast.ai is using to store all the data. Thanks!
Hi, I tried the same thing but without success. I’m getting still the message “Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.”
I have one machine with a gpu for training and I want to export the learner in order to create a docker container with a small web application. Basically my project is based on this - https://github.com/simonw/cougar-or-not
Whenever I build the docker image I’m getting the following error message:
Traceback (most recent call last):
File "index.py", line 26, in <module>
learner = load_learner('.', 'export.pkl')
....
File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
I’m using fast.ai version 1.0.42 on both environments.
What I’ve tried so far:
several commands in the Jupyter notebook on my GPU machine to set the device to the CPU before exporting my learner:
torch.cuda.set_device = torch.device('cpu')
torch_core.defaults.device = 'cpu'
defaults.device = torch.device('cpu')
adjusting the load_learner command (l = load_learner(path=Path('./'), cpu=True)). This gives me the error message: "TypeError: load_learner() got an unexpected keyword argument 'cpu'" (I'm not sure if this improvement is already merged into the 1.0.42 release.)
tried to use the CPU environment on my GPU machine. Unfortunately the file environment-cpu.yml from the fastai repo is only for fastai version 0.7.x, and I'm not sure how to adjust setup.py for my CPU environment.
So I’m a bit stuck. Does anyone have a tip or a hint for me which points me into the correct direction?
Thanks!
The fix with cpu=True is in master only for now and will be in 1.0.43 when we release it. It's also possible that the parts that were serialized on the GPU by mistake are fixed in master as well, so I'd definitely suggest trying a developer install.