Lesson 6: How can I understand the number of weights of SimpleRNN?

In the ‘Our first RNN with Keras!’ section of lesson 6, we defined the following RNN:

        Embedding(vocab_size, n_fac, input_length=cs),
        SimpleRNN(n_hidden, activation='relu', return_sequences=True, inner_init='identity'),
        TimeDistributed(Dense(vocab_size, activation='softmax'))

The embedding creates an output of size (num_samples, cs, n_fac).

The SimpleRNN loops over axis 1 (the cs timesteps) and, at each timestep, performs a dense input2hidden transform and a dense hidden2hidden transform, then merges (adds) the two results.
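The per-timestep computation described above can be sketched in NumPy. This is a minimal sketch with made-up sizes; the variable names (W_ih, b_ih, etc.) are my own, and each dense transform here carries its own bias, matching my expectation rather than necessarily what Keras does:

```python
import numpy as np

# Hypothetical toy sizes (the lesson's actual values may differ)
num_samples, cs, n_fac, n_hidden = 2, 8, 42, 256

# Embedding output, as described above: (num_samples, cs, n_fac)
emb_out = np.random.randn(num_samples, cs, n_fac)

# Each dense transform with its own weight matrix and bias (my assumption)
W_ih = np.random.randn(n_fac, n_hidden)
b_ih = np.zeros(n_hidden)
W_hh = np.random.randn(n_hidden, n_hidden)
b_hh = np.zeros(n_hidden)

h = np.zeros((num_samples, n_hidden))
for t in range(cs):                 # loop over axis 1, the cs timesteps
    x_t = emb_out[:, t, :]          # (num_samples, n_fac)
    # input2hidden + hidden2hidden, merged by addition, then relu
    h = np.maximum(0.0, (x_t @ W_ih + b_ih) + (h @ W_hh + b_hh))

print(h.shape)  # (num_samples, n_hidden)
```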

I would therefore expect the number of weights to be

n_fac * n_hidden (input2hidden) + n_hidden (input2hidden bias)
+ n_hidden * n_hidden (hidden2hidden) + n_hidden (hidden2hidden bias)
= n_hidden * n_hidden + n_fac * n_hidden + 2 * n_hidden
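With hypothetical sizes plugged in (n_fac=42, n_hidden=256 are made up here, not necessarily the lesson's values), the expected count works out as:

```python
# Hypothetical sizes; the actual lesson values may differ
n_fac, n_hidden = 42, 256

input2hidden  = n_fac * n_hidden + n_hidden      # weights + bias
hidden2hidden = n_hidden * n_hidden + n_hidden   # weights + bias
expected = input2hidden + hidden2hidden

# Same total, collected as in the formula above
assert expected == n_hidden * n_hidden + n_fac * n_hidden + 2 * n_hidden
print(expected)  # → 76800
```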

But the number of weights Keras actually reports is just

n_hidden * n_hidden + n_fac * n_hidden + 1 * n_hidden
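Comparing the two formulas with the same hypothetical sizes, the gap is exactly one bias vector:

```python
# Hypothetical sizes; the actual lesson values may differ
n_fac, n_hidden = 42, 256

expected = n_hidden * n_hidden + n_fac * n_hidden + 2 * n_hidden
reported = n_hidden * n_hidden + n_fac * n_hidden + 1 * n_hidden

print(expected - reported)  # → 256, i.e. exactly n_hidden: one missing bias
```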

So it seems that either the input2hidden transform or the hidden2hidden transform has no bias. Is that correct? And if so, which of the two is it, and why is it done that way?
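For what it's worth, a formulation with a single shared bias computes the same function as one with two separate biases, since relu(xW + b1 + hU + b2) = relu(xW + hU + (b1 + b2)). A quick NumPy check (toy sizes and names of my own choosing; this shows only that one bias suffices mathematically, not which convention Keras follows):

```python
import numpy as np

# Hypothetical toy sizes
n_fac, n_hidden = 4, 5
x  = np.random.randn(3, n_fac)      # input at one timestep
h  = np.random.randn(3, n_hidden)   # previous hidden state
W  = np.random.randn(n_fac, n_hidden)
U  = np.random.randn(n_hidden, n_hidden)
b1 = np.random.randn(n_hidden)      # hypothetical input2hidden bias
b2 = np.random.randn(n_hidden)      # hypothetical hidden2hidden bias

relu = lambda z: np.maximum(0.0, z)

two_biases = relu((x @ W + b1) + (h @ U + b2))
one_bias   = relu(x @ W + h @ U + (b1 + b2))   # biases folded into one

print(np.allclose(two_biases, one_bias))  # → True
```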