I suspect he’ll cover that in the ML class; replacing a “traditional” ML model with a deep learning model shouldn’t change the ensembling step.
See the forums, where there are a couple of pointers to articles about this.
No fastai. I feel handicapped!
Sounds great! Thanks @yinterian
Some of the predictions are exceeding five; would this model perform better if the values were capped between zero and five?
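For what it's worth, a common fix is to squash the raw score through a sigmoid scaled to the rating range (IIRC this is what fastai's `y_range` argument does). A minimal PyTorch sketch, with the range values purely illustrative:

```python
import torch

# Minimal sketch: map an unbounded raw score into a fixed rating range.
# The 5.5 upper bound is slightly above 5 so the sigmoid can actually
# reach a prediction of 5; the exact values here are illustrative.
def scale_to_range(raw_score, y_min=0.0, y_max=5.5):
    # sigmoid maps any real number into (0, 1); rescaling maps it into
    # (y_min, y_max), so predictions can never blow past the cap
    return torch.sigmoid(raw_score) * (y_max - y_min) + y_min

raw = torch.tensor([-2.0, 0.0, 3.7, 10.0])
print(scale_to_range(raw))  # every value lands strictly inside (0, 5.5)
```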
How did you decide on the dimension of that matrix? Why 5 rows and not some other number?
Are those numbers embeddings?
The random numbers
I guess that’s like features!
embeddings for movies and users
What are the differences between features and embeddings? Different words same thing?
embeddings are learned
Those are the features which aren’t explicit! We want the NN to find them for us?
Features need not necessarily be embeddings. A feature could be as simple as “is the date a state holiday?”. Embeddings are learned numeric representations of these features. I think they are the answer to the question: “How can we represent BladeRunner2049 numerically?”
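To make the distinction concrete, here’s a tiny PyTorch sketch (the movie ID is made up): an embedding is just a learnable lookup table, whereas a hand-crafted feature is computed once from the data and stays fixed.

```python
import torch
import torch.nn as nn

n_movies, n_factors = 10, 5                    # 5 factors, matching the matrix discussed above
movie_emb = nn.Embedding(n_movies, n_factors)  # starts random, updated by gradient descent

blade_runner_id = torch.tensor([3])    # hypothetical integer ID for BladeRunner2049
print(movie_emb(blade_runner_id))      # its current 5-dim numeric representation

# Contrast with an explicit feature, which is derived directly from the data
# rather than learned:
is_state_holiday = 1                   # e.g. computed from the date column
```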
Responding to the question “do you retrain for a new user or new movie?”
At this year’s Grace Hopper, many companies mentioned that they approach that problem the way Jeremy described, i.e., you’d have a specific new-user or new-movie model, then retrain over time.
The “what are you interested in?” step during onboarding (like meetup.com’s) is a popular approach, but IMO it’s not very friendly if your user isn’t part of your core user group, for example.
What does it mean to try different n_factors sizes? Do you train and evaluate using MSE and compare across different embedding sizes for a given number of epochs?
Yep, it’s a hyperparameter … so AFAIK no one has worked out a way of automatically picking the best number.
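In practice “trying different sizes” tends to be a plain loop: train one model per candidate size for a fixed epoch budget and keep the one with the lowest MSE. A self-contained sketch on synthetic data (your real DataLoader and a held-out validation set would go where the random tensors are):

```python
import torch
import torch.nn as nn

def fit_and_score(n_factors, n_users=100, n_movies=100, epochs=50):
    """Train a tiny dot-product model on synthetic ratings; return final MSE.
    In practice you'd measure MSE on a held-out validation set instead."""
    torch.manual_seed(0)
    users = torch.randint(0, n_users, (1000,))
    movies = torch.randint(0, n_movies, (1000,))
    ratings = torch.rand(1000) * 5
    u_emb = nn.Embedding(n_users, n_factors)
    m_emb = nn.Embedding(n_movies, n_factors)
    opt = torch.optim.Adam(list(u_emb.parameters()) + list(m_emb.parameters()), lr=0.1)
    for _ in range(epochs):
        pred = (u_emb(users) * m_emb(movies)).sum(dim=1)  # dot product per rating
        loss = nn.functional.mse_loss(pred, ratings)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Same epoch budget for every candidate size; lowest MSE wins.
scores = {n: fit_and_score(n) for n in (10, 20, 40, 80)}
print(min(scores, key=scores.get), scores)
```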
Correct!
thanks!
“…wait, what?…”
It’s so comforting that happens to you too @jeremy!
Any resource that helps explain what a hyperparameter is? Thanks in advance!