Moving inference to the CPU

I’ve trained a model using v1. I now want to deploy the model and run predictions but only on CPU.

I’ve tried the following, which is taken from this topic. The code looks like:

```
= False
learn.model = learn.model.cpu()
```

When I attempt to obtain a prediction, I get the following error:

```
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'index'
```

This error has also cropped up in the Productionizing models thread.

Any idea how to resolve the error?


Your model is on the CPU but your data is automatically on the GPU; that’s why you get this error. Either don’t put the model on the CPU, or change the default device with default.device = torch.device('cpu').
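In case it helps to see the mechanics outside fastai, here is a minimal plain-PyTorch sketch of the underlying rule: the model and the input tensor must live on the same device before the forward pass runs (the layer sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

device = torch.device('cpu')  # the device everything must agree on

model = nn.Linear(4, 2).to(device)    # model on the CPU
x = torch.randn(1, 4, device=device)  # data on the SAME device

with torch.no_grad():
    out = model(x)  # no backend mismatch: both live on the CPU

print(out.shape)  # torch.Size([1, 2])
```

If `x` were a CUDA tensor while the model stayed on the CPU, the forward pass would raise a backend-mismatch error like the one above.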


Perfect! That did the trick, thanks!

Hello @sgugger, in the notebook lesson2-download.ipynb, default.device = torch.device('cpu') does not work when I run it on my computer, and neither does fastai.defaults.device = torch.device('cpu').

What works is fastai.device = torch.device('cpu').

Is it the same for you?

Are you sure you are using the latest version?

I’m using fastai version 1.0.39.dev0 on Windows 10. Is it not the latest one?

Yes, it’s the latest release, and defaults.device should work (if you don’t import * you will have to import defaults manually from its module).

You are right that defaults.device works, but (at least with my configuration) not fastai.defaults.device as written in the notebook lesson2-download.ipynb.

Moreover, fastai.device works as well.

See my tests below:


@sgugger, maybe a late one, but worth checking.

Regarding inference on CPU, I have observed that half of the CPU cores are used for batch preparation and the other half are always used for the actual matrix computation to get the predictions. Is there any way I can split CPU usage other than half and half? I tried num_workers, but it seems to have no impact.
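For reference, in plain PyTorch these two halves are controlled by two separate knobs: num_workers only sets how many subprocesses prepare batches, while the matrix math runs in the main process, whose thread count is set with torch.set_num_threads. A small sketch with a synthetic dataset (the sizes and thread counts here are illustrative, not recommendations):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# synthetic dataset: 64 samples of 4 features each
ds = TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,)))

# num_workers: subprocesses doing batch preparation (0 = main process only)
dl = DataLoader(ds, batch_size=16, num_workers=0)

# set_num_threads: intra-op threads used for the matrix computation itself
torch.set_num_threads(2)

n_batches = sum(1 for _ in dl)
print(n_batches)  # 4 batches of 16 from 64 samples
```

So if num_workers seems to have no effect on the compute half, that is expected; torch.set_num_threads is the knob for that side.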

In the first comment you stated it was default.device, but in the second comment you stated defaults.device (with an s). That, I think, was the source of the confusion.
What worked for me was defaults.device = torch.device('cpu'), while fastai.device and default.device both did not work.

Can this be set after my model has been loaded and trained on the GPU?


Can this be set after my model has been loaded and trained on the GPU?

Very good question, do you have the answer?

Set dls.device = "cpu"

fastai looks at the DataLoaders for the device to use (and then adjusts the device the model is on).

Note: this is for fastai v2

Thank you.

I have one last question.
On my computer I have a good GPU (NVIDIA GeForce GTX 1060 6GB), and I use my computer to create and test my model.

But I know my model will be used on computers without a GPU, so I want to test my model in real conditions (without a GPU).
My question is: can I create the model on my computer with my GPU, and afterwards test and use it (still on my computer) without the GPU, to measure the prediction time?

Or do I need to create my model without a GPU?

I think I can create the model with my GPU, and when I use it, add these lines:

```python
defaults.device = torch.device('cpu')

preds = learn_inf.predict(img)
```

Hi, you can create the model on your computer with a GPU and afterwards use it on a computer without one.
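To measure the prediction time on CPU, a simple wall-clock sketch in plain PyTorch works; the model below is a stand-in for your exported learner (with fastai you would time learn_inf.predict(img) the same way):

```python
import time
import torch
import torch.nn as nn

# stand-in network; replace with your own exported model
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model = model.cpu().eval()
x = torch.randn(1, 8)

with torch.no_grad():
    model(x)  # warm-up run, excluded from the timing
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    avg_s = (time.perf_counter() - start) / 100

print(f"average CPU prediction time: {avg_s * 1000:.3f} ms")
```

Averaging over many runs after a warm-up gives a more stable estimate than timing a single prediction.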