I was creating a DataBunch with from_df and a Learner with the plain Learner class (because I wanted to use a custom PyTorch model):
magic_model = magic().cuda()
data = TabularDataBunch.from_df("", train_df, "target", valid_idx=valid_index,
                                procs=procs, bs=batch_size, test_df=test_df)  # .cuda()
learn = Learner(data, magic_model, loss_func=torch.nn.BCEWithLogitsLoss(),
                callback_fns=[partial(EarlyStoppingCallback, monitor='val_loss',
                                      min_delta=1e-3, patience=3)])
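Then I call fit (this is the call at fastai3_gpu.py line 95 in the traceback below), and it fails immediately:

learn.fit_one_cycle(20, 1e-3, wd=0.05)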
epoch train_loss valid_loss time
Traceback (most recent call last):
File "fastai3_gpu.py", line 95, in <module>
learn.fit_one_cycle(20, 1e-3,wd=0.05)
File "/xd/envs/py3.6/lib/python3.6/site-packages/fastai/train.py", line 22, in fit_one_cycle
learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
File "/xd/envs/py3.6/lib/python3.6/site-packages/fastai/basic_train.py", line 199, in fit
fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
File "/xd/envs/py3.6/lib/python3.6/site-packages/fastai/basic_train.py", line 101, in fit
loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
File "/xd/envs/py3.6/lib/python3.6/site-packages/fastai/basic_train.py", line 26, in loss_batch
out = model(*xb)
File "/xd/envs/py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "fastai3_gpu.py", line 79, in forward
x = self.fc1(x)
File "/xd/envs/py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/xd/envs/py3.6/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
return F.linear(input, self.weight, self.bias)
File "/xd/envs/py3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 1352, in linear
ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #4 'mat1'
It seems to me that the model is on the GPU but the data isn't. But in this Moving inference to the CPU thread, it sounds like they should all end up on the GPU by default. What happened?
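In case it helps, this is the kind of check I would run to see which side ends up on which device (just a sketch using the objects defined above; train_dl and device should be the standard fastai v1 DataBunch attributes):

print(next(magic_model.parameters()).device)  # where the model weights live
print(data.device)                            # device the DataBunch moves batches to
xb, yb = next(iter(data.train_dl))            # grab one training batch
print([t.device for t in xb], yb.device)      # tabular x is a [x_cat, x_cont] pair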
Also, the same error shows up when I try setting the device explicitly; I get it on the GPU as well as with the CPU settings below:
defaults.device = torch.device('cpu')
fastai.device = torch.device('cpu')
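To make sure I am not misunderstanding how the device handling is supposed to work, here is roughly what I would have expected to be enough (a sketch only; I am assuming defaults.device is read when the DataBunch is created and that Learner then moves the model to data.device, so no manual .cuda()/.cpu() calls would be needed):

from fastai.tabular import *

defaults.device = torch.device('cuda')  # or torch.device('cpu') to keep everything on the CPU

data = TabularDataBunch.from_df("", train_df, "target", valid_idx=valid_index,
                                procs=procs, bs=batch_size, test_df=test_df)
magic_model = magic()                   # no explicit .cuda() here
learn = Learner(data, magic_model, loss_func=torch.nn.BCEWithLogitsLoss())
learn.fit_one_cycle(20, 1e-3, wd=0.05)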