Making predictions with collaborative filtering

In lesson 4, there was an introduction to collaborative filtering. The Excel table at 1:43:20 explained the math behind it very well.

However, I do not understand how to make a prediction for a new user. During training, each user is assigned 5 factors. But if I now get a new user, how do I determine those 5 factors for them?

I have the same problem with the Keras implementation:

We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.

model.predict([np.array([3]), np.array([6])])

I do not understand how it can be useful to ask the network how much a user who already existed when the network was trained likes a particular movie. I would rather ask the network: “Given movies w, x, y with ratings a, b, c, how much would the user like movie z?”


If you have no information about a new user, you could use a median or mode embedding, and use that to predict their preferences for movies.
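To make that suggestion concrete, here is a toy sketch of the median-embedding idea in plain Python (all factor values and the 3-factor size are made up; bias terms are omitted for brevity):

```python
from statistics import median

# Hypothetical learned factors: 4 existing users x 3 latent factors each
# (the lesson uses 5 factors; 3 here just keeps the example small).
user_factors = [
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.5],
    [0.4, 0.7, 0.1],
    [0.8, 0.2, 0.6],
]

# A "median user": the per-dimension median across all user embeddings.
cold_start_user = [median(col) for col in zip(*user_factors)]

# Predict a rating for one movie via the usual dot product of factors.
movie_factors = [0.5, 0.3, 0.9]
predicted = sum(u * m for u, m in zip(cold_start_user, movie_factors))
```

Once the new user has rated a few movies themselves, you would replace this placeholder vector with properly fitted factors.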


I am having the same confusion. How do I predict for a new user? Any help is much appreciated. @jeremy


I had the same confusion. As far as I understand, after adding a new user to the system and collecting some data about this particular user’s preferences (the user should rate some movies), we need to retrain the whole network to take this new data into account.
Initially this seemed like overkill to me - recalibrating the whole system to make predictions for one particular user. But generally speaking it seems unavoidable if we want to implement real “collaborative” filtering, i.e. each user’s choices should influence all of the system’s predictions.

Practically, we can recalculate the model e.g. once per day, and for complete newcomers we can make a best guess by providing the mean (or median) rating for a particular movie across all users. The idea of using a median embedding vector as input to the model seems interesting, and much more computationally efficient, but I’m not sure we would get the same result. Also (this is just a technical implementation question) I wonder how we can provide an embedding vector as input to the model, since the model from the lesson expects user_ids and movie_ids as input.
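On that last point: since the forward pass of this kind of model is essentially an embedding lookup followed by a dot product (bias terms omitted here), one option is to extract the learned factor tables and run that arithmetic outside the model. A toy sketch - all names and numbers below are hypothetical:

```python
# Hypothetical factor tables pulled from a trained model's embedding layers.
user_factors = {3: [0.9, 0.1, 0.3]}
movie_factors = {6: [0.5, 0.3, 0.9]}

def predict_by_vector(user_vec, movie_vec):
    # The dot product accepts any vector, e.g. a median embedding,
    # not just a row looked up by user id.
    return sum(u * m for u, m in zip(user_vec, movie_vec))

def predict_by_id(user_id, movie_id):
    # Roughly what predicting from an id pair does under the hood:
    # look up each embedding, then take the dot product.
    return predict_by_vector(user_factors[user_id], movie_factors[movie_id])
```

This sidesteps the “ids as input” restriction: the model object itself only accepts ids, but the arithmetic it performs does not.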

The other question: could we train the new network more efficiently (more quickly) if we have a previously trained network and just a few new users added? Can we freeze all the weights except the embeddings of the new user(s) and do some kind of fine-tuning? I plan to experiment and share the results.
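For what it’s worth, the “freeze everything except the new user’s embedding” idea can be sketched without any framework: hold the movie factors fixed and run gradient descent on just the new user’s vector. Everything below (names, numbers, learning rate) is hypothetical:

```python
def fit_new_user(ratings, movie_factors, n_factors=3, lr=0.1, epochs=50):
    """Learn factors for one new user from (movie_id, rating) pairs,
    holding all movie factors fixed: gradient descent on squared error."""
    user_vec = [0.0] * n_factors
    for _ in range(epochs):
        for movie_id, rating in ratings:
            mvec = movie_factors[movie_id]
            pred = sum(u * m for u, m in zip(user_vec, mvec))
            err = pred - rating
            # Update only the user's factors; movie factors stay frozen.
            user_vec = [u - lr * err * m for u, m in zip(user_vec, mvec)]
    return user_vec

# Hypothetical frozen movie factors and a new user's few ratings.
movie_factors = {0: [1.0, 0.0, 0.0], 1: [0.0, 1.0, 0.0]}
new_user = fit_new_user([(0, 4.0), (1, 2.0)], movie_factors)
```

In a real framework you would get the same effect by marking every layer non-trainable except the (grown) user embedding table, then fitting on the new users’ ratings only.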


maciejkula, the author of the popular lightfm library and the newly released spotlight library, has given a talk on this topic where he also discusses building effective recsys for new users. Instead of estimating a latent vector per user and per item, he suggests estimating latent vectors for user and item metadata. This metadata can help for new or rare users. More here:
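To illustrate the idea (this is a sketch of the concept, not LightFM’s actual API - all feature names and numbers are made up): a user is represented as the sum of the embeddings of their metadata features, so a brand-new user with known metadata still gets a usable vector.

```python
# Hypothetical learned embeddings for metadata features, not for raw ids.
feature_factors = {
    "age:25-34":    [0.6, 0.1],
    "likes:sci-fi": [0.2, 0.9],
}

def user_embedding(features):
    # Sum the feature vectors: any user describable by known features
    # gets an embedding, even with zero rating history.
    dims = len(next(iter(feature_factors.values())))
    vec = [0.0] * dims
    for f in features:
        vec = [v + fv for v, fv in zip(vec, feature_factors[f])]
    return vec

new_user_vec = user_embedding(["age:25-34", "likes:sci-fi"])
```

The resulting vector can then be dotted with item vectors exactly as before, which is why this handles the cold-start case the thread is asking about.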