Chapter 08 - Collaborative Filtering Error

Hey guys, I was trying to follow along with Chapter 08 on Collaborative Filtering from fastbook on Google Colab, and I came across a couple of errors.

Firstly, when we attempt to create an Embedding layer from scratch, the nn.Parameter object is on the CPU by default for some reason, which leads to this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

I tried to solve this error by returning nn.Parameter(torch.zeros(*size).normal_(0,0.01)).to("cuda") from the create_params function, which then led to the next error: IndexError: list index out of range.

I’m not sure if I did something wrong, so I’ll put my code below:

# imports from the start of the notebook
from fastai.collab import *
from fastai.tabular.all import *


def create_params(size):
    # my attempted fix for the device error: move the parameter onto the GPU
    return nn.Parameter(torch.zeros(*size).normal_(0, 0.01)).to("cuda")


class DotProductBias(Module):
    def __init__(self, n_users, n_movies, n_factors, y_range=(0, 5.5)):
        self.user_factors = create_params([n_users, n_factors])
        self.user_bias = create_params([n_users])
        self.movie_factors = create_params([n_movies, n_factors])
        self.movie_bias = create_params([n_movies])
        self.y_range = y_range

    def forward(self, x):
        # look up the latent factors for the users and movies in the batch
        users = self.user_factors[x[:, 0]]
        movies = self.movie_factors[x[:, 1]]
        # dot product of user and movie factors, plus the bias terms
        res = (users * movies).sum(dim=1)
        res += self.user_bias[x[:, 0]] + self.movie_bias[x[:, 1]]
        return sigmoid_range(res, *self.y_range)
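
For reference, the book's version of create_params (before I added the .to("cuda")) was just this, if I'm copying it correctly:

def create_params(size):
    return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))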

Any help would be great!

Hi there (sorry for the late reply). Maybe you could make sure everything ends up on one device or the other. If you’re using an online coding environment like Colab, check that the runtime has the GPU hardware accelerator enabled, so that the model and the data can both sit on the GPU instead of being split between the CPU and the GPU.
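
If it helps, here's a quick sanity check you can run in a cell (just a sketch, assuming the dls, n_users and n_movies variables from the notebook are defined) to see which device everything actually ends up on:

import torch

print(torch.cuda.is_available())                  # True if the runtime actually has a GPU

x, y = dls.one_batch()                            # a batch from the CollabDataLoaders
print(x.device)                                   # where the data lives

model = DotProductBias(n_users, n_movies, 50)
print([p.device for p in model.parameters()])     # where the registered parameters live

If that last list comes out empty, the model has no registered parameters, which could also be where the IndexError is coming from.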

If you’re doing this from your own computer (assuming you have an NVIDIA GPU to use for training your model), then follow this guide on how to make a Jupyter notebook run on the GPU.
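
Once the GPU is visible to PyTorch, you can also move the model over explicitly before training; roughly something like this (a sketch, not something I've tested against your exact notebook):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = DotProductBias(n_users, n_movies, 50)
model.to(device)     # .to() on a Module moves all of its registered parameters onto that device

That way the parameters and the batches should end up on the same device without needing the .to("cuda") inside create_params.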

Hope this helps!