In lesson 4, there was an introduction to collaborative filtering. The Excel table at 1:43:20 (https://youtu.be/V2h3IOBDvrA?t=1h43m20s) explained the math behind it very well.

However, I do not understand how to make a prediction for a new user. In the process of training, each user gets assigned 5 factors. But if I now have a new user, how do I find out those 5 factors for him?
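To make the question concrete, here is a minimal numpy sketch (with hypothetical, randomly initialized factor matrices standing in for the trained ones) of how a prediction is formed from the learned factors, and why a brand-new user has nothing to look up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, n_factors = 10, 20, 5

# Hypothetical "learned" factor matrices: one 5-dim vector per user / movie.
user_factors = rng.normal(size=(n_users, n_factors))
movie_factors = rng.normal(size=(n_movies, n_factors))

def predict(user_id, movie_id):
    # A prediction is just the dot product of the two factor vectors.
    return user_factors[user_id] @ movie_factors[movie_id]

print(predict(3, 6))

# A new user (e.g. id 10) has no row in user_factors yet, so there is
# nothing to look up -- that is exactly the cold-start problem asked about.
```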

We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.

import numpy as np

# Predicted rating for the pair (user_id=3, movie_id=6)
model.predict([np.array([3]), np.array([6])])

I do not understand how it can be useful to ask the network how much a user who already existed when the network was trained likes a particular movie. I would rather ask the network "Given movies w, x, y with ratings a, b, c, how much would the user like movie z?"

I had the same confusion. As far as I understand, after adding a new user to the system and collecting some data about this particular new user's preferences (the user should rate some movies), we need to retrain the whole network taking this new data into consideration.
Initially this seemed like overkill to me - you have to recalibrate the whole system to make predictions for one particular user. But generally speaking it seems unavoidable if we want to implement truly "collaborative" filtering, i.e. each user's choices should influence all of the system's predictions.

Practically, we can recalculate the model e.g. once per day, and for complete newcomers we can make a best guess by providing the mean (or median) rating for a particular movie among all users. The idea of using the median embedding vector as input to the model seems interesting, and much more computationally efficient, but I'm not sure we would get the same result, and (it is just a technical implementation question, but) I also wonder how we could provide an embedding vector as input to the model (the model from the lesson expects user_ids and movie_ids as input).
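Both fallbacks mentioned above can be sketched in plain numpy, sidestepping the question of how to feed a vector into the Keras model (one would need to build a second model sharing the trained embedding layers, which I have not verified against the lesson's code). All embeddings and ratings below are randomly generated stand-ins for the trained ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_movies, n_factors = 100, 50, 5

# Hypothetical learned embeddings and an observed ratings table (1-5 stars).
user_emb = rng.normal(size=(n_users, n_factors))
movie_emb = rng.normal(size=(n_movies, n_factors))
ratings = rng.integers(1, 6, size=(n_users, n_movies)).astype(float)

# Fallback 1: for a brand-new user, guess each movie's mean rating.
mean_rating = ratings.mean(axis=0)          # shape (n_movies,)

# Fallback 2: treat the new user as the "median user" by taking the
# element-wise median of all user embeddings, then score movies with it.
median_user = np.median(user_emb, axis=0)   # shape (n_factors,)
median_scores = movie_emb @ median_user     # shape (n_movies,)

print(mean_rating[6], median_scores[6])
```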

The other question: could we train the new network more efficiently (quicker) if we have a previously trained network and just several new users added? Can we freeze all the weights except the embeddings of the new user(s) and do some kind of fine-tuning? I plan to experiment and share the results.
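The freeze-and-fine-tune idea reduces to a tiny least-squares fit: hold the movie embeddings fixed and optimize only the new user's 5-factor vector against the few ratings they have provided. A sketch with a hypothetical frozen embedding matrix (in Keras one would instead set `trainable = False` on the other layers):

```python
import numpy as np

rng = np.random.default_rng(2)
n_movies, n_factors = 50, 5

# Hypothetical frozen movie embeddings from the already-trained model.
movie_emb = rng.normal(size=(n_movies, n_factors))

# The new user has rated only a handful of movies.
rated_ids = np.array([3, 17, 25])
rated_vals = np.array([5.0, 2.0, 4.0])

# Fit only this user's factor vector by gradient descent on squared error;
# every other parameter stays frozen.
u = np.zeros(n_factors)
lr = 0.05
for _ in range(2000):
    preds = movie_emb[rated_ids] @ u
    grad = movie_emb[rated_ids].T @ (preds - rated_vals) / len(rated_ids)
    u -= lr * grad

# u can now score any movie in the catalogue: movie_emb @ u
```

Because only one small vector is updated, this is far cheaper than a nightly full retrain, though the new user's ratings then have no influence on anyone else's predictions.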

maciejkula, the author of the popular lightfm and the newly released spotlight libraries, has given a talk on this same topic, where he speaks about building effective recsys for new users too. Instead of estimating a latent vector per user and item, he suggests estimating latent vectors for user and item metadata. This metadata can help for new / rare users. More here: https://youtu.be/EgE0DUrYmo8?t=533
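The core of that metadata idea can be illustrated in a few lines: latent vectors are learned per metadata feature rather than per user, and a new user's vector is just the sum of their feature vectors. The feature names and random vectors below are purely illustrative, not LightFM's actual API:

```python
import numpy as np

rng = np.random.default_rng(3)
n_factors = 5

# Hypothetical latent vectors learned per *metadata feature* (age bucket,
# country, taste tags, ...) instead of per user.
feature_emb = {
    "age:18-25": rng.normal(size=n_factors),
    "country:DE": rng.normal(size=n_factors),
    "likes:scifi": rng.normal(size=n_factors),
}

movie_emb = rng.normal(size=n_factors)  # one (made-up) movie's latent vector

# A brand-new user is represented as the sum of their feature vectors,
# so a first prediction needs no per-user training at all.
new_user = sum(feature_emb[f] for f in ["age:18-25", "likes:scifi"])
score = new_user @ movie_emb
```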