Matrix factorization is one of the collaborative filtering (CF) techniques, where the likelihood of a user liking an item is effectively expressed as the dot product of their respective embeddings (latent vectors). The approach optimizes a chosen loss through SGD.
Now these embeddings are optimized for a loss between the ground truths (implicit or explicit ratings) and the embedding dot products.
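To make the setup concrete, here is a minimal sketch of dot-product MF trained with SGD on a toy explicit-ratings matrix (the matrix, embedding size, learning rate, and regularization weight are all made-up illustration values, not anything from a real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy explicit-ratings matrix: rows = users, cols = items, 0 = unobserved.
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 2.0, 5.0]])
observed = list(zip(*np.nonzero(R)))

k, lr, reg = 4, 0.05, 0.01                 # embedding dim, step size, L2 weight
U = rng.normal(scale=0.1, size=(3, k))     # user embeddings
V = rng.normal(scale=0.1, size=(3, k))     # item embeddings

for epoch in range(200):
    for u, i in observed:
        err = R[u, i] - U[u] @ V[i]        # error between rating and dot product
        u_old = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])   # SGD step on squared loss
        V[i] += lr * (err * u_old - reg * V[i])

print(U[0] @ V[0])  # prediction for the first observed rating
```

After a couple hundred epochs the predicted dot products land close to the observed ratings, which is the behaviour the distance-based variant below would need to reproduce.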
Since it is the embeddings that we are learning, I was wondering whether it would be possible to instead optimize the euclidean distance between the user-item embeddings. We could then simply perform a k-NN search in the embedding space for similar products, etc. At the same time, if we have any negative interactions, we could attempt to increase the distance between those embeddings.
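One way the idea above could be sketched is as a contrastive-style objective: pull positive user-item pairs together by descending on the squared euclidean distance, and push negative pairs apart with a hinge so they only repel while closer than a margin. Everything here (the interaction lists, margin, learning rate, sizes) is an invented toy setup, not a tested recipe:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 4, 6, 8
U = rng.normal(scale=0.1, size=(n_users, k))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, k))   # item embeddings

positives = [(0, 1), (0, 2), (1, 2), (2, 3)]   # observed interactions
negatives = [(0, 4), (1, 5)]                   # explicit negative interactions
margin, lr = 2.0, 0.1

for epoch in range(100):
    for u, i in positives:
        # Pull: gradient step on ||U[u] - V[i]||^2 moves the pair together.
        diff = U[u] - V[i]
        U[u] -= lr * diff
        V[i] += lr * diff
    for u, i in negatives:
        # Push: hinge — only repel while the pair is inside the margin.
        diff = U[u] - V[i]
        if np.linalg.norm(diff) < margin:
            U[u] += lr * diff
            V[i] -= lr * diff

# k-NN retrieval then reduces to nearest neighbours by euclidean distance.
dists = np.linalg.norm(V - U[0], axis=1)       # user 0 to every item
ranking = np.argsort(dists)                    # closest items first
```

One known caveat with pure pull-based objectives is embedding collapse (everything shrinking to a point), which is why the margin/hinge on negatives, or sampled negatives when none are observed, tends to matter in practice.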
@radek , apart from applying the above approach to user-item interaction matrix, one could also apply the same to item-item co-visitation matrix, about which I learnt during the OTTO competition.
I’d like to hear suggestions from learned folks on whether or not this is feasible, and if yes, how I might go about implementing it.
Thanks in advance