Your model is on the CPU but your data is automatically on the GPU; that's why you get this error. Either don't put the model on the CPU, or change the default device with default.device = torch.device('cpu').
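For illustration, here is a minimal pure-PyTorch sketch of the mismatch and of both fixes (the model and data here are just placeholders):

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 2)            # model created on the CPU
x = torch.randn(4, 10).to(device)   # data sent to the GPU when one is available

# Calling model(x) now would raise a device-mismatch error if x is on the GPU.

# Fix 1: move the model to the same device as the data.
out = model.to(device)(x)

# Fix 2: keep everything on the CPU instead.
out_cpu = model.cpu()(x.cpu())
```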
Hello @sgugger, in the notebook lesson2-download.ipynb, default.device = torch.device('cpu') does not work when I run it on my computer, and neither does fastai.defaults.device = torch.device('cpu').
You are right that defaults.device works, but (at least with my configuration) fastai.defaults.device, as written in the notebook lesson2-download.ipynb, does not.
Regarding inference on the CPU, I have observed that half of the CPU cores are used for batch preparation and the other half are always used for the actual matrix computation that produces the predictions. Is there any way to split the CPU usage other than half and half? I tried num_workers, but it seems to have no impact.
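For reference, these are the two knobs I have been trying (a rough plain-PyTorch sketch; the numbers are just examples, and the actual split I observe may also depend on the BLAS backend):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Threads used for the actual matrix computation (intra-op parallelism).
torch.set_num_threads(2)

ds = TensorDataset(torch.randn(256, 10))

# Worker processes used only for batch preparation; num_workers=0 means
# batches are built in the main process.
dl = DataLoader(ds, batch_size=32, num_workers=0)
```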
In the first comment you stated it was default.device, but in the second comment you stated defaults.device (with an s). I think that was the source of the confusion.
What worked for me was defaults.device = torch.device('cpu'), while neither fastai.device nor default.device worked.
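In case it helps anyone else, this is the exact form in context (a minimal sketch using the fastai v1 imports from the notebook):

```python
import torch
from fastai.vision import *   # fastai v1; this brings `defaults` into the namespace

defaults.device = torch.device('cpu')   # run everything on the CPU
```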
I have one last question.
On my computer I have a good GPU (an NVIDIA GeForce GTX 1060 6GB), and I use it to create and test my model.
But I know my model will be used on computers without a GPU, so I want to test it under real conditions (without a GPU). My question is:
Can I create the model on my computer with my GPU, and afterwards test and use it (on my computer) without the GPU (to measure the prediction time)?
Or do I need to create my model without a GPU?
I think I can create the model with my GPU, and when I use it, add this line:
defaults.device = torch.device('cpu')
…
preds = learn_inf.predict(img)
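Concretely, something like this is what I have in mind for measuring the prediction time (a rough sketch assuming fastai v1 as in lesson2-download.ipynb; the paths are placeholders):

```python
import time
import torch
from fastai.vision import *   # fastai v1, as in lesson2-download.ipynb

# Force inference onto the CPU even though a GPU is present.
defaults.device = torch.device('cpu')

learn_inf = load_learner('.', 'export.pkl')   # placeholder: folder holding export.pkl
img = open_image('test.jpg')                  # placeholder test image

# Time a single prediction to get the CPU-only latency.
start = time.perf_counter()
preds = learn_inf.predict(img)
print(f'Prediction took {time.perf_counter() - start:.3f}s')
```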