Making predictions with collaborative filtering

I had the same confusion. As far as I understand, after adding a new user to the system and collecting some data about this particular user's preferences (the user should rate some movies), we need to retrain the whole network, taking this new data into consideration.
Initially this seemed like overkill to me: you have to recalibrate the whole system to make predictions for one particular user. But generally speaking it seems unavoidable if we want to implement truly "collaborative" filtering, i.e. each user's choices should influence all of the system's predictions.

Practically, we can recalculate our model periodically, e.g. once per day, and for complete newcomers make a best guess by providing the mean (or median) rating for a particular movie across all users. The idea of using the median embedding vector as input to the model seems interesting, and much more computationally efficient, but I'm not sure we would get the same result. Also (it's just a technical implementation question, but) I wonder how we can provide an embedding vector as input to the model, since the model from the lesson expects user_ids and movie_ids as input; see the sketch below.
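One way around the "model expects IDs" problem is to bypass the user embedding lookup and feed a precomputed vector straight into the dot product. Here is a minimal sketch, assuming a simple dot-product model in the spirit of the lesson's (the class and method names here are my own, not from the lesson):

```python
import torch
import torch.nn as nn

class DotProductCF(nn.Module):
    """A simple dot-product collaborative filtering model."""
    def __init__(self, n_users, n_movies, n_factors=50):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, n_factors)
        self.movie_emb = nn.Embedding(n_movies, n_factors)

    def forward(self, user_ids, movie_ids):
        # Normal path: look up embeddings by ID.
        return (self.user_emb(user_ids) * self.movie_emb(movie_ids)).sum(dim=1)

    def forward_with_user_vector(self, user_vec, movie_ids):
        # Alternative path: skip the user embedding lookup and use a
        # precomputed vector (e.g. the mean of all trained user embeddings).
        return (user_vec * self.movie_emb(movie_ids)).sum(dim=1)

model = DotProductCF(n_users=1000, n_movies=5000)

# "Average user" vector: mean over the rows of the trained embedding matrix.
mean_user_vec = model.user_emb.weight.mean(dim=0, keepdim=True)  # shape (1, n_factors)

movie_ids = torch.tensor([10, 42, 99])
preds = model.forward_with_user_vector(mean_user_vec, movie_ids)  # shape (3,)
```

The mean vector broadcasts against the batch of movie embeddings, so you get one prediction per movie without needing a user_id at all.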

The other question: could we train the new network more efficiently (i.e. faster) if we already have a previously trained network and just a few new users were added? Can we freeze all the weights except the embeddings of the new user(s) and do some kind of fine-tuning? I plan to experiment and share the results; a rough sketch of what I have in mind is below.
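Something like this should work, assuming the dot-product model from the sketch above; `new_user_batches` is a hypothetical iterable of (new_user_idx, movie_ids, ratings) batches built from the new users' ratings:

```python
import torch
import torch.nn as nn

# `model` is a trained DotProductCF-style model (see the sketch above),
# and n_new_users new users have arrived with some ratings.
n_new_users = 3
n_factors = model.user_emb.embedding_dim

# Freeze every existing parameter so fine-tuning can't disturb them.
for p in model.parameters():
    p.requires_grad = False

# Trainable embeddings for the new users only, initialized at the mean
# of the existing user embeddings (a reasonable warm start).
new_user_emb = nn.Embedding(n_new_users, n_factors)
with torch.no_grad():
    new_user_emb.weight.copy_(model.user_emb.weight.mean(dim=0, keepdim=True))

optimizer = torch.optim.Adam(new_user_emb.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Fine-tuning loop over the new users' ratings (data loader assumed).
for new_user_idx, movie_ids, ratings in new_user_batches:
    optimizer.zero_grad()
    user_vecs = new_user_emb(new_user_idx)                     # (batch, n_factors)
    preds = (user_vecs * model.movie_emb(movie_ids)).sum(dim=1)
    loss = loss_fn(preds, ratings)
    loss.backward()   # gradients flow only into new_user_emb
    optimizer.step()
```

Since only the new embedding rows receive gradients, each step is cheap, and the rest of the system's predictions stay exactly as they were until the next full retrain.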
