Role of input_length parameter in embedding layer

From the Keras documentation:

input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed).

I don’t follow why input_length is needed here, or how it helps to compute the shape of the Dense outputs. Consider the following example:
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

n_hidden, n_fac, cs, vocab_size = (199, 50, 10, 86)

model = Sequential([
    Embedding(vocab_size, n_fac, input_length=cs),
    SimpleRNN(n_hidden, activation='relu', recurrent_initializer='identity'),
    Dense(vocab_size, activation='softmax'),
])
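
To reproduce the summary below (assuming the standalone keras package imported above):

model.summary()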

This model has the following summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_6 (Embedding)      (None, 10, 50)            4300
_________________________________________________________________
simple_rnn_6 (SimpleRNN)     (None, 199)               49750
_________________________________________________________________
dense_6 (Dense)              (None, 86)                17200
=================================================================

Note that the number of parameters in the embedding layer is vocab_size * n_fac = 86 * 50 = 4300.

The number of parameters in the SimpleRNN layer is n_hidden * (n_hidden + n_fac + 1) = 199 * 250 = 49750.

The number of parameters in the Dense layer is vocab_size * (n_hidden + 1) = 86 * 200 = 17200.
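
As a quick sanity check, a minimal sketch recomputing these counts from the values defined above; note that cs (the input_length) appears in none of them:

n_hidden, n_fac, cs, vocab_size = (199, 50, 10, 86)

print(vocab_size * n_fac)                 # 4300  (Embedding)
print(n_hidden * (n_hidden + n_fac + 1))  # 49750 (SimpleRNN: recurrent + input weights + biases)
print(vocab_size * (n_hidden + 1))        # 17200 (Dense: weights + biases)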

So, as far as I can tell, the input_length parameter is never actually used anywhere in these calculations. Why, then, does the documentation say it is required?
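
For contrast, here is a minimal sketch (my own construction, not from the docs) of the Flatten-then-Dense case the documentation seems to refer to, reusing the values from above; there the time dimension must be fixed for the Dense weight shape to be computable:

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

# Without input_length, the Embedding output shape is (None, None, 50), and
# Flatten cannot produce a fixed-length vector for Dense to size its kernel.
flat_model = Sequential([
    Embedding(vocab_size, n_fac, input_length=cs),  # (None, 10, 50)
    Flatten(),                                      # (None, 500)
    Dense(vocab_size, activation='softmax'),
])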

Any ideas?