Hi, I tried the same thing but without success. I'm still getting the message "Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False."
I have one machine with a GPU for training, and I want to export the learner in order to build a Docker container with a small web application. My project is basically based on this: https://github.com/simonw/cougar-or-not
Whenever I build the Docker image I get the following error message:
Traceback (most recent call last):
  File "index.py", line 26, in <module>
    learner = load_learner('.', 'export.pkl')
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
I’m using fast.ai version 1.0.42 on both environments.
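For reference, the workaround the error message itself points at can be sketched like this (a hypothetical patch for index.py, applied before load_learner runs; it assumes fastai routes its deserialization through torch.load, which I haven't verified for 1.0.42):

```python
import torch

# Force every subsequent torch.load call to map CUDA storages onto
# the CPU, regardless of the device the tensors were saved from.
_original_load = torch.load

def cpu_load(*args, **kwargs):
    kwargs['map_location'] = torch.device('cpu')  # override any device hint
    return _original_load(*args, **kwargs)

torch.load = cpu_load

# learner = load_learner('.', 'export.pkl') should then deserialize on CPU
```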
What I’ve tried so far:
- Several commands in the Jupyter notebook on my GPU machine to set the device to the CPU before exporting the learner:
torch.cuda.set_device = torch.device('cpu')
torch_core.defaults.device = 'cpu'
defaults.device = torch.device('cpu')
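One variant that isn't in the list above: instead of changing the default device, move the model itself to the CPU before exporting, so the pickle never contains CUDA storages in the first place. A minimal sketch with a stand-in nn.Module (whether learn.model = learn.model.cpu() followed by learn.export() behaves the same way in fastai 1.0.42 is an assumption on my part):

```python
import io
import torch
import torch.nn as nn

# Stand-in for learn.model -- any nn.Module behaves the same way.
model = nn.Linear(4, 2)

# On the GPU box this would come back from model.cuda(); calling
# .cpu() moves all parameters onto CPU storages before pickling,
# so the saved file contains no CUDA tensors at all.
model = model.cpu()

buf = io.BytesIO()
torch.save(model.state_dict(), buf)

# The weights now load fine without CUDA being available.
buf.seek(0)
state = torch.load(buf, map_location='cpu')
```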
- Adjusting the load_learner call (l = load_learner(path=Path('./'), cpu=True)). This gives me the error message "TypeError: load_learner() got an unexpected keyword argument 'cpu'". (I'm not sure whether that improvement has already been merged into the 1.0.42 release.)
- Using the CPU environment on my GPU machine. Unfortunately, the environment-cpu.yml file from the fastai repo is only for fastai 0.7.x, and I'm not sure how to adjust setup.py for my CPU environment.
So I'm a bit stuck. Does anyone have a tip or hint that points me in the right direction?