I am trying out dot-product-based collaborative filtering, without any bias terms or additional layers. I am getting a model which has user embeddings and movie embeddings.

How can I see the embeddings of a particular user or a particular movie?
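If the model is a plain PyTorch module whose embedding layers are named u and m (names borrowed from the code below), you can look up a row with a long index tensor, or read the weight matrix directly. A minimal sketch with made-up sizes, not the actual model class:

```python
import torch
import torch.nn as nn

# toy dot-product model (hypothetical sizes), mirroring the u/m layer names
class DotProduct(nn.Module):
    def __init__(self, n_users, n_movies, n_factors=50):
        super().__init__()
        self.u = nn.Embedding(n_users, n_factors)
        self.m = nn.Embedding(n_movies, n_factors)

m = DotProduct(n_users=100, n_movies=200)

# embedding vector for user 3; indices must be a LongTensor
user_vec = m.u(torch.tensor([3]))
print(user_vec.shape)   # torch.Size([1, 50])

# or read the raw weight row directly
row = m.u.weight[3]
print(row.shape)        # torch.Size([50])
```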

However, I am not able to use apply on this so that I can predict the ratings of all users for all movies.
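For what it's worth, the all-users-by-all-movies score matrix doesn't need apply at all: it is just the product of the two embedding weight matrices. A sketch with made-up dimensions and no bias terms:

```python
import torch
import torch.nn as nn

n_users, n_movies, n_factors = 5, 7, 3
u = nn.Embedding(n_users, n_factors)
m = nn.Embedding(n_movies, n_factors)

# all-pairs dot products: (n_users, k) @ (k, n_movies) -> (n_users, n_movies)
with torch.no_grad():
    scores = u.weight @ m.weight.t()
print(scores.shape)   # torch.Size([5, 7])
```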

import math

# plain-Python sigmoid (unused below; F.sigmoid is applied instead)
def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def predict_values(row):
    users, movies = row['userId'], row['movieId']
    # dot product of the user and movie embedding vectors
    um = (m.u(V(users)) * m.m(V(movies))).sum(1)
    # add the user and movie bias terms
    res = um + m.ub(V(users)).squeeze() + m.mb(V(movies)).squeeze()
    # squash into the rating range
    res = F.sigmoid(res) * (max_rating - min_rating) + min_rating
    return res.view(res.size()[0], 1)

ratings2['pred'] = ratings2.apply(predict_values, axis=1)

This gives me an error:

RuntimeError: ("Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAFloatTensor instead (while checking arguments for embedding)", 'occurred at index 0')

It says that you have the wrong tensor type: you are passing a torch.FloatTensor (which was moved to the GPU with a .cuda() call), but the embedding layer expects indices of scalar type Long. I guess you need to check the types of your arguments, and make sure that you're not passing a GPU tensor when a CPU one is expected, and vice versa.
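A minimal reproduction of the error, independent of the model above: nn.Embedding rejects float indices and accepts long ones.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)

# float indices reproduce the RuntimeError; long indices work
try:
    emb(torch.tensor([1.0]))      # FloatTensor -> rejected
except RuntimeError as e:
    print("rejected:", e)

out = emb(torch.tensor([1]))      # LongTensor (default dtype for int data)
print(out.shape)                  # torch.Size([1, 4])
```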

I can see that the error occurs because of a type mismatch.
For instance,

m.u(V(ratings2.loc[0]['userId']))
throws the same error.

RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAFloatTensor instead (while checking arguments for embedding)

seems like ratings2.loc[0]['userId'] doesn't have the correct datatype?

Yes, you're passing a plain float scalar, but embedding lookups need long indices. I guess you should pass something like torch.tensor(ratings2['userId'], dtype=torch.long).cuda() instead.
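For example, with a toy DataFrame (id columns frequently come back as float64 from pandas), the cast would look like this; add .cuda() at the end if the model lives on the GPU:

```python
import pandas as pd
import torch

# hypothetical frame standing in for ratings2
ratings2 = pd.DataFrame({'userId': [0.0, 1.0, 2.0]})

# cast the float64 column to long before building the index tensor
idx = torch.tensor(ratings2['userId'].values, dtype=torch.long)
print(idx.dtype)   # torch.int64
```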

@ArchieIndian, did you end up solving this, and can you link to the notebook? I've tried to re-create your approach here and am getting different predictions than learn.predict(row from dataframe) for some reason.
Would really appreciate it!