LanguageLearner output interpretation

What is the output of LanguageLearner’s encoder model? Why is it duplicated?

By encoder I mean learn.model[0].
For example, for an AWD-LSTM (unidirectional, using QRNN) it is:

AWD_LSTM(
  (encoder): Embedding(60000, 400, padding_idx=1)
  (encoder_dp): EmbeddingDropout(
    (emb): Embedding(60000, 400, padding_idx=1)
  )
  (rnns): ModuleList(
    (0): QRNN(
      (layers): ModuleList(
        (0): QRNNLayer(
          (linear): WeightDropout(
            (module): Linear(in_features=800, out_features=3456, bias=True)
          )
        )
      )
    )
    (1): QRNN(
      (layers): ModuleList(
        (0): QRNNLayer(
          (linear): WeightDropout(
            (module): Linear(in_features=1152, out_features=3456, bias=True)
          )
        )
      )
    )
    (2): QRNN(
      (layers): ModuleList(
        (0): QRNNLayer(
          (linear): WeightDropout(
            (module): Linear(in_features=1152, out_features=1200, bias=True)
          )
        )
      )
    )
  )
  (input_dp): RNNDropout()
  (hidden_dps): ModuleList(
    (0): RNNDropout()
    (1): RNNDropout()
    (2): RNNDropout()
  )
)
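
For reference, the printout above comes from simply printing the first child of the learner's model. A minimal sketch, assuming learn is a LanguageLearner that was already created with fastai v1's language_model_learner (not shown here):

encoder = learn.model[0]   # first child of the SequentialRNN wrapper, i.e. the AWD_LSTM encoder
print(encoder)             # produces the module tree shown above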

I’ve printed out the shapes of the output; it contains something like this:

([torch.Size([1, 26, 1152]), torch.Size([1, 26, 1152]), torch.Size([1, 26, 400])],
 [torch.Size([1, 26, 1152]), torch.Size([1, 26, 1152]), torch.Size([1, 26, 400])])
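
Roughly, I got those shapes with something like the sketch below (the names first/second are just placeholders I'm using here; it assumes the learn object from above and grabs one batch from its validation DataLoader, so the exact batch and sequence sizes will differ):

xb, yb = next(iter(learn.data.valid_dl))   # one (input, target) batch from the LM DataBunch
learn.model.reset()                        # clear the RNN hidden state before a manual forward pass
first, second = learn.model[0](xb)         # the encoder returns a tuple of two lists of tensors
print([t.shape for t in first], [t.shape for t in second])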

What confuses me is that corresponding elements (e.g. the first element of the first list and the first element of the second list in that tuple) seem to be exactly the same.
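
This is how I'd verify that corresponding tensors really are identical (a sketch reusing the placeholder names first/second from the snippet above):

import torch

# compare corresponding tensors from the two lists returned by the encoder
for a, b in zip(first, second):
    print(a.shape, torch.equal(a, b))   # True only if both shape and every value match exactly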

Is this some artifact of supporting bidirectional models, so that the output of a unidirectional model has the same (duplicated) structure as that of a bidirectional one?