Hi all,
I am struggling to deeply understand embeddings.
In collaborative-filtering-deep-dive.ipynb an embedding is defined like this:
Embedding: Multiplying by a one-hot-encoded matrix, using the computational shortcut that it can be implemented by simply indexing directly. This is quite a fancy word for a very simple concept. The thing that you multiply the one-hot-encoded matrix by (or, using the computational shortcut, index into directly) is called the embedding matrix.
- My first problem is that I can't see what one-hot encoding has to do with embeddings.
I understand that
user_factors.t() @ one_hot_3 is the same as user_factors[3] (a minimal check of this is below).
But I can't see one-hot encoding anywhere later in the lesson, not even when Jeremy builds his own embedding module from scratch.
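Here is the check I mean, as a self-contained sketch with made-up sizes:

import torch

n_users, n_factors = 10, 5
user_factors = torch.zeros(n_users, n_factors).normal_(0, 0.01)

# one-hot vector that selects user 3
one_hot_3 = torch.zeros(n_users)
one_hot_3[3] = 1.

# (n_factors, n_users) @ (n_users,) -> (n_factors,), i.e. exactly row 3 of user_factors
assert torch.allclose(user_factors.t() @ one_hot_3, user_factors[3])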
He creates embeddings just by defining and calling this function:
def create_params(size):
    return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))
...
self.user_factors = create_params([n_users, n_factors])
...
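As far as I can tell, the forward pass then just indexes into this matrix with the user ids, so the one-hot multiplication never appears explicitly. This is my own simplified sketch of what I mean, comparing such a parameter matrix against nn.Embedding (not Jeremy's exact code):

import torch
import torch.nn as nn

def create_params(size):
    return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))

n_users, n_factors = 10, 5
user_factors = create_params([n_users, n_factors])

# an nn.Embedding with the same weights, for comparison
emb = nn.Embedding(n_users, n_factors)
emb.weight.data = user_factors.data.clone()

idx = torch.tensor([3, 7])  # a mini-batch of user ids
# both are plain row lookups; no one-hot matrix is ever built
assert torch.equal(user_factors[idx], emb(idx))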
- I also don't have a clear understanding of how embeddings are used in tabular models as categorical embeddings.
E.g. the embedding size for Titanic's Sex feature is (3, 3).
Sex has only 2 unique values, so why does it need 3 rows?
Also, Pclass has 3 unique values but its embedding size is (4, 3), why 4?
It seems the number of rows is calculated as unique_values + 1, but why? (I can reproduce the sizes below, but not the reason.)
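For reference, I can reproduce exactly those sizes with fastai's emb_sz_rule (assuming fastai v2, where it lives in fastai.tabular.model) once I add the +1 by hand:

from fastai.tabular.model import emb_sz_rule

for name, n_unique in [("Sex", 2), ("Pclass", 3)]:
    n_rows = n_unique + 1  # this +1 is the part I don't understand
    print(name, (n_rows, emb_sz_rule(n_rows)))
# Sex (3, 3)
# Pclass (4, 3)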
Thank you for the help!