How are the Embeddings really continuous?

In the Chapter 9 Tabular notebook, it says:

In addition, it is valuable in its own right that embeddings are continuous, because models are better at understanding continuous variables.

If I’m not missing a key point here, this phrase seems a bit misleading: I think they don’t mean that the embeddings themselves are continuous, but rather that the distance between embeddings is continuous. Am I right?

If you look at something like this:

import torch
from torch import nn

emb = nn.Embedding(2, 10)       # 2 categories, 10-dimensional embedding
inp = torch.tensor([[0], [1]])  # the two category indices
emb(inp)

You can see that I have a class 0 and a class 1 as inputs to the embedding, and the output of emb is 10 numbers per class, all floats:

tensor([[[-0.1962, -0.1154, -0.1046,  1.5857,  1.4024, -0.2066,  0.2506,
           0.6266,  0.5403, -1.0044]],

        [[-0.2608,  1.1237, -0.0058,  2.6213, -2.2449,  0.6882,  0.5776,
          -1.2328,  0.2632,  0.5728]]], grad_fn=<EmbeddingBackward>)

So the embedding layer is taking the categorical values (0 and 1) and converting them into 10 continuous numbers that can be trained.

So before, the only information the model had as input was a single value that was either 0 or 1. By passing those values through an embedding layer, each of the 2 classes is converted into 10 continuous values that the model can use to represent different information about that class.
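Here’s a minimal sketch in plain PyTorch (the dummy loss and variable names are just for illustration) showing that those 10 values per class are ordinary trainable parameters that receive gradients:

import torch
from torch import nn

emb = nn.Embedding(2, 10)         # 2 classes, 10 learned values per class
inp = torch.tensor([0, 1])

out = emb(inp)                    # shape (2, 10), tracked by autograd
loss = out.sum()                  # dummy loss, just for illustration
loss.backward()

print(emb.weight.requires_grad)   # True  -> the embedding vectors are parameters
print(emb.weight.grad.shape)      # torch.Size([2, 10]) -> they receive gradients

During training an optimizer updates emb.weight, so the 10 values per class end up encoding whatever the model finds useful about each class.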


I thought an embedding was just indexing into an array so that we can do the matrix multiplication efficiently. I didn’t know it converts discrete values to continuous values.
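Both views actually fit together: the lookup is mathematically equivalent to multiplying a one-hot encoding of the index by the embedding’s weight matrix, and the indexing is just the efficient way to compute it. A quick check in plain PyTorch (nothing fastai-specific here, variable names are just for illustration):

import torch
from torch import nn
import torch.nn.functional as F

emb = nn.Embedding(2, 10)
inp = torch.tensor([0, 1])

lookup = emb(inp)                               # indexing into the weight array
one_hot = F.one_hot(inp, num_classes=2).float() # shape (2, 2)
matmul = one_hot @ emb.weight                   # same result via matrix multiplication

print(torch.allclose(lookup, matmul))           # True

So the discrete class index is only used to pick a row; what the model actually sees downstream is that row of continuous, trainable values.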