Moving inference to the CPU

I’ve trained a model using v1. I now want to deploy the model and run predictions but only on CPU.

I’ve tried the following, which is taken from this topic. The code looks like:

= False
learn.model = learn.model.cpu()

When I attempt to obtain a prediction:


I get the following error:

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'index'

This error has also cropped up in the Productionizing models thread and is here

Any idea how to resolve the error?


Your model is on the CPU but your data is automatically put on the GPU; that’s why you get this error. Either don’t put the model on the CPU, or change the default device with default.device = torch.device('cpu').
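The mismatch is easier to see in plain PyTorch. A minimal sketch (the `nn.Linear` layer here is just a hypothetical stand-in for `learn.model`):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for learn.model
model = nn.Linear(4, 2)

# Put the model on the CPU and switch to eval mode for inference
model = model.cpu().eval()

# The input must live on the same device as the model; a tensor
# created with torch.randn lives on the CPU by default
x = torch.randn(1, 4)

with torch.no_grad():
    pred = model(x)
```

If `x` had been created on the GPU while the model stayed on the CPU, the forward pass would raise exactly the kind of backend-mismatch RuntimeError quoted above.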


Perfect! That did the trick, thanks!

Hello @sgugger, in the notebook lesson2-download.ipynb, default.device = torch.device('cpu') does not work when I run it on my computer, nor does fastai.defaults.device = torch.device('cpu').

What works is fastai.device = torch.device('cpu')

Does it work for you?

Are you sure you are using the latest version?

I use fastai version 1.0.39.dev0 on Windows 10. Is it not the latest one?

Yes, it’s the latest release, and defaults.device should work (if you don’t import *, you will have to import defaults manually from its module).

You are right that defaults.device works, but not (at least with my configuration) fastai.defaults.device as written in the notebook lesson2-download.ipynb.

Moreover, fastai.device works as well.

See my tests below:


@sgugger, maybe a late one, but worth checking.

Regarding inference on the CPU, I have observed that half of the CPUs are used for batch preparation and the other half are always used for the actual matrix computation to get the predictions. Is there any way I can split the CPU usage other than half and half? I tried num_workers, but it seems to have no impact.
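In plain PyTorch the two halves are controlled by different knobs: num_workers on the DataLoader sets how many worker processes prepare batches, while torch.set_num_threads sets the intra-op threads used for the matrix math itself. A minimal sketch (the toy dataset is a hypothetical stand-in for the real one; whether fastai v1 forwards num_workers to the underlying DataLoader depends on how the DataBunch was built):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset standing in for the real one
ds = TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,)))

# num_workers controls how many worker processes prepare batches
# (0 = batches are built in the main process)
dl = DataLoader(ds, batch_size=16, num_workers=0)

# The matrix math uses intra-op threads, controlled separately
# from the DataLoader workers
torch.set_num_threads(2)

for xb, yb in dl:
    pass  # run inference on each batch here
```

So changing num_workers alone won’t reduce the threads doing the actual computation; torch.set_num_threads is the knob for that side.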

In the first comment you stated it was default.device, but in the second comment you stated defaults.device (with an s). I think that was the source of the confusion.
What worked for me was defaults.device = torch.device('cpu'), while fastai.device and default.device both did not work.

Can this be set after my model has been loaded and trained on the GPU?
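One common way to handle this after GPU training, sketched in plain PyTorch (the `nn.Linear` model and the in-memory buffer are hypothetical stand-ins for the real learner and checkpoint file): save the weights, then reload them with map_location='cpu'.

```python
import io
import torch
import torch.nn as nn

# Hypothetical model standing in for the trained learner's model
model = nn.Linear(4, 2)

# Save the weights, as you would after training on the GPU
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)

# map_location='cpu' remaps any CUDA tensors in the checkpoint
# onto the CPU, so the load works on a CPU-only machine
state = torch.load(buf, map_location="cpu")

cpu_model = nn.Linear(4, 2)
cpu_model.load_state_dict(state)
cpu_model.eval()
```

With the weights remapped this way, setting the default device to CPU afterwards only needs to ensure new input tensors are created on the CPU too.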
