Input and hidden tensors on different devices when calling predict()

This is an API question for fastai v1.0.30.

Yesterday I trained a language model on a custom corpus on CUDA and saved it. Today I loaded it and called predict, but it raised an error. How do I move my hidden tensor to CUDA?

I have already tried running these lines one at a time, but none of them solved it, and I have searched the forums, blogs, and docs:

learner.load(name='red_model_2018jan_5layer')  # also tried with device='cuda'
#learner.gpu()
#learner.model.cuda()
#learner.model = learner.model.cuda()
#learner.model.to('cuda')
#learner = learner.model.to('cuda')
#learner.model = learner.model.to('cuda')
#learner.d  # no learner.device()
learner.model.cuda()  # docstring: Moves all model parameters and buffers to the GPU.
#learner.to_fp16()
learner.predict(text='news', n_words=5)  # RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
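
For context, here is a minimal plain-PyTorch sketch of what this error usually means (the nn.LSTM setup below is illustrative, not fastai's actual model): the parameters and the input end up on the GPU, while a hidden state that was created earlier stays on the CPU. Moving or rebuilding the hidden state on the same device as the parameters makes the call go through.

# Minimal sketch of the failure mode with a plain nn.LSTM (illustrative only,
# not fastai's model): the hidden state was built on the CPU before the
# model and the input were moved to the GPU.
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)

# Hidden state created while everything still lived on the CPU.
h0 = torch.zeros(2, 1, 16)   # (num_layers, batch, hidden_size)
c0 = torch.zeros(2, 1, 16)

# Parameters and input are moved afterwards; the stale hidden state is not.
lstm = lstm.to(device)
x = torch.randn(5, 1, 8, device=device)   # (seq_len, batch, input_size)

try:
    out, _ = lstm(x, (h0, c0))   # on a GPU this raises the "not at the same device" RuntimeError
except RuntimeError as err:
    print(err)

# Fix: keep the hidden state on the same device as the parameters,
# e.g. by moving it (or rebuilding it) after the model has been moved.
param_device = next(lstm.parameters()).device
h0, c0 = h0.to(param_device), c0.to(param_device)
out, _ = lstm(x, (h0, c0))   # works

The same principle should apply here: whatever tensor holds the language model's hidden state has to end up on the same device as the parameters, not just the parameters themselves. In fastai v1 the language model carries that state inside the model, and its reset() method appears to rebuild it from the weights, so calling learner.model.reset() after learner.model.cuda() may be worth trying; I have not confirmed that this is the intended fix for 1.0.30.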

I was unable to find an answer. It is probably out there; I just can't find it.


Did you ever find a solution by any chance?