Quick doubt about RNN layers - Lecture 6

Hi,

I am a little confused about how c2 is applied to dense_in. In the diagram, c2 (the second character input) joins the output of the first character input's (c1) activation at the second hidden layer. But in the code, c2 is first passed through dense_in, which is the very first activation layer, and only then merged with c1's output from layer 2. That means the dense_hidden layer (activation layer 2) is never applied to c2 at all. This doesn't match the diagram as I read it. It seems I am missing something, or else the diagram needs a small change? Please help.

The flow in the program looks like this (read ' as prime):

1. c1, c2, c3 all go through the dense_in layer (the first activation), which spits out c1', c2', c3'.
2. Then c1' goes through the hidden layer, which spits out c1'' (double prime), and that is merged with c2' to give c12'.
3. Then c12' goes through the hidden layer, which spits out c12'' (double prime), and that is merged with c3' to give c123''.

This is a clumsy way of expressing it, but it helped me organize the layers and how they are applied. A sketch of what I mean is below.
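To make this concrete, here is a minimal sketch of the flow as I understand it, written with the Keras functional API. This is my own reconstruction, not the lesson's exact code: the layer sizes, the plain dense inputs (the lesson uses embeddings), and the use of tf.keras instead of the older Keras API are all my assumptions.

```python
# Minimal sketch of the 3-char model flow as I read it.
# Sizes and input shapes are hypothetical; the lesson's actual
# code uses embeddings and an older Keras API.
from tensorflow.keras.layers import Input, Dense, Add
from tensorflow.keras.models import Model

n_in, n_hidden, vocab_size = 42, 256, 86  # made-up sizes

c1 = Input(shape=(n_in,))
c2 = Input(shape=(n_in,))
c3 = Input(shape=(n_in,))

# shared layers, applied repeatedly
dense_in = Dense(n_hidden, activation='relu')      # first activation layer
dense_hidden = Dense(n_hidden, activation='relu')  # layer-2 (hidden) activation
dense_out = Dense(vocab_size, activation='softmax')

# step 1: every character goes through dense_in first -> c1', c2', c3'
c1_p = dense_in(c1)
c2_p = dense_in(c2)
c3_p = dense_in(c3)

# step 2: c1' -> dense_hidden -> c1''; merged with c2' to give c12'
c12 = Add()([dense_hidden(c1_p), c2_p])

# step 3: c12' -> dense_hidden -> c12''; merged with c3' to give c123''
c123 = Add()([dense_hidden(c12), c3_p])

out = dense_out(c123)
model = Model(inputs=[c1, c2, c3], outputs=out)
```

Note that in this sketch c2' itself never passes through dense_hidden before the merge, which is exactly the part I find inconsistent with the diagram.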