How to interpret the weights of a collab learner

I've watched Lesson 5 a couple of times now and I can create some collaborative filtering models on my own. They're not in production, but I could replicate and understand everything Jeremy did. The problem is that I can't understand what the weights of the model mean — not in a mathematical sense, but in a human-interpretable way.
How can I get a better sense of what my embeddings mean? Jeremy used PCA to plot the movielens weights, and his explanation helped. But what about other kinds of problems? Is there a way to know why a particular latent factor has a high value?
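For reference, the PCA step I'm referring to looks roughly like this. This is just a sketch with a random matrix standing in for the trained embedding weights (the names `movie_factors` and the shapes are my own assumptions, not from the lesson), and PCA is done directly via SVD:

```python
import numpy as np

# Hypothetical stand-in for trained embedding weights:
# 100 movies x 40 latent factors (shapes are made up for illustration).
rng = np.random.default_rng(0)
movie_factors = rng.normal(size=(100, 40))

# PCA via SVD: center the factors, then project each movie
# onto the top 3 principal components for plotting.
centered = movie_factors - movie_factors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ vt[:3].T

print(components.shape)  # (100, 3)
```

Plotting the first two columns of `components` and labeling points with movie titles is what made the movielens factors feel interpretable, but that trick relied on already knowing the movies.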