Unrolling factor in RNN created in lesson 6

In the lesson 6 notebook, a simple RNN was created to make a character-level language model:


model = Sequential([
    Embedding(vocab_size, n_fac, input_length=cs),
    SimpleRNN(n_hidden, activation='relu', recurrent_initializer='identity'),
    Dense(vocab_size, activation='softmax')
])

Of course, any RNN has to be unrolled so that (truncated) backpropagation through time can be applied. My question is: how does Keras know how many timesteps to unroll? Does it take input_length=cs as the unrolling depth?
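To make my question concrete, here is a minimal NumPy sketch of what I understand the forward pass to be doing (the weight names and shapes are my own illustration, not from the notebook): the loop runs exactly once per position in the input sequence, so the sequence length would fix the unrolling depth.

```python
import numpy as np

cs = 8                  # sequence length (input_length in the Keras model)
n_fac, n_hidden = 4, 16

# illustrative SimpleRNN cell weights
Wx = np.random.randn(n_fac, n_hidden) * 0.01
Wh = np.eye(n_hidden)                   # identity recurrent init, as in the model
b = np.zeros(n_hidden)

x = np.random.randn(cs, n_fac)          # one sequence of cs embedded characters
h = np.zeros(n_hidden)
states = []
for t in range(cs):                     # the graph is unrolled exactly cs times
    h = np.maximum(0, x[t] @ Wx + h @ Wh + b)   # relu activation
    states.append(h)

print(len(states))  # 8: one hidden state per timestep
```

If this picture is right, the unrolling depth is simply the number of loop iterations, i.e. cs.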

Further, it seems to me that the BPTT truncation depth should be the same as the unrolling factor. Is this correct?
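My reasoning, sketched on a toy scalar linear RNN (my own example, not from the lesson): since the hidden state is reset to zero at the start of each window of length cs, backpropagation can only accumulate gradient over those cs steps, so the truncation depth falls out of the unrolling.

```python
# Toy scalar linear RNN: h_t = w * h_{t-1} + x_t, loss = h_cs.
# Nothing before the window contributes to dL/dw, because h_0 = 0.
cs = 8
w = 0.5
x = [1.0] * cs

h = 0.0
hs = [h]                           # hs[t] is the state entering step t
for t in range(cs):
    h = w * hs[t] + x[t]
    hs.append(h)

# Backprop through the unrolled graph: exactly cs multiply-add steps.
grad_w, grad_h = 0.0, 1.0          # dL/dh_cs = 1
for t in reversed(range(cs)):
    grad_w += grad_h * hs[t]       # contribution of step t to dL/dw
    grad_h *= w                    # dh_{t+1}/dh_t = w

print(grad_w)  # 3.859375
```

So the gradient is a sum of exactly cs terms, which is what I mean by the truncation depth equalling the unrolling factor.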