[lesson 6] The size of the zeros vector in a Seq-to-Seq RNN & Embedding

Hi,

In the “Returning sequences” section of the lesson 6 notebook, why does the zeros vector have this shape?

In [68]:
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
Out[68]:
(75110, 42)

This is equal to (number of samples × n_fac). Why?
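
To understand the tiling itself, I tried a toy version with small made-up numbers (n_fac = 3 and 5 samples instead of 42 and 75110):

import numpy as np

# Toy check: np.tile(vec, (reps, 1)) stacks `reps` copies of the zero
# vector as rows, giving one n_fac-wide zero row per training sample.
n_fac, num_samples = 3, 5
zeros = np.tile(np.zeros(n_fac), (num_samples, 1))
print(zeros.shape)  # (5, 3), i.e. (number of samples, n_fac)

So the tile call just repeats the n_fac-wide zero vector once per sample; what I don't get is why the model needs one such row per training sample.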

And a question about Embedding.
Each Embedding layer is different from the others, because in the summary Keras counts 3570 (85 × 42) parameters for each embedding layer. That means each character in the sequence gets a different set of n_fac = 42 parameters in each layer. Am I understanding this correctly?
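
To check that count, I built a toy model with a single embedding layer (using the Keras functional API from the course era; newer Keras versions may name the arguments differently):

from keras.layers import Input, Embedding
from keras.models import Model

# One embedding layer over an 85-character vocabulary, mapping each
# character index to an n_fac = 42 dimensional vector. Its weight
# matrix is (vocab_size x n_fac), i.e. 85 * 42 = 3570 parameters.
vocab_size, n_fac = 85, 42
inp = Input(shape=(1,), dtype='int64')
emb = Embedding(vocab_size, n_fac, input_length=1)(inp)
model = Model(inp, emb)
model.summary()  # the embedding layer reports 3570 parameters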

Does the output of the Embedding layer have size n_fac = 42?
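
My mental model, in plain NumPy (toy random values, not the course code): an embedding is just a row lookup into a (vocab_size x n_fac) table, so every character index should come out as an n_fac-wide vector.

import numpy as np

# Stand-in for the embedding weights: one row of width n_fac per
# vocabulary entry. Looking up a character index returns its row.
vocab_size, n_fac = 85, 42
table = np.random.randn(vocab_size, n_fac)
print(table[10].shape)  # (42,), one n_fac-wide vector per character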

Thanks!

How can I delete this post? I wrote it in the wrong place. Sorry.