Is load_encoder loading more than just the first layer?

I notice that load_encoder/save_encoder for RNNLearner do not load/save only the first layer of the RNN; they actually load/save the whole encoder network.
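The behaviour I'm describing can be sketched like this. The save_encoder/load_encoder helpers below are hypothetical stand-ins (not fastai's actual source), and plain dicts replace torch tensors so the sketch is framework-free; the point is that the encoder module owns the embedding and all three LSTMs, so saving or loading its state touches every RNN layer, not just the first.

```python
# Mock of the encoder's state_dict; key names follow the module dump
# shown further down (embedding plus three WeightDropout-wrapped LSTMs).
encoder_state = {
    "encoder.weight": "emb",
    "rnns.0.module.weight_ih_l0": "lstm0",
    "rnns.1.module.weight_ih_l0": "lstm1",
    "rnns.2.module.weight_ih_l0": "lstm2",
}

def save_encoder(state, store):
    # analogous to torch.save(learn.model[0].state_dict(), path):
    # the WHOLE encoder state is written out
    store["enc"] = dict(state)

def load_encoder(store):
    # analogous to learn.model[0].load_state_dict(torch.load(path)):
    # everything that was saved comes back
    return dict(store["enc"])

store = {}
save_encoder(encoder_state, store)
restored = load_encoder(store)
# restored contains the embedding AND rnns.0, rnns.1, rnns.2
```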

For example, load_model for a text_classifier_learner loads all the RNN layers:



  (encoder): Embedding(24, 50, padding_idx=1)
  (encoder_dp): EmbeddingDropout(
    (emb): Embedding(24, 50, padding_idx=1)
  )
  (rnns): ModuleList(
    (0): WeightDropout(
      (module): LSTM(50, 1152, batch_first=True)
    )
    (1): WeightDropout(
      (module): LSTM(1152, 1152, batch_first=True)
    )
    (2): WeightDropout(
      (module): LSTM(1152, 50, batch_first=True)
    )
  )
  (input_dp): RNNDropout()
  (hidden_dps): ModuleList(
    (0): RNNDropout()
    (1): RNNDropout()
    (2): RNNDropout()
  )

Is that behaviour correct? Should load_encoder load only the embedding and the first layer of the RNN?
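If the intended behaviour really were to restore only the embedding and the first RNN layer, one way to get it (a sketch, not anything fastai provides) would be to filter the saved state_dict by key prefix before loading. Plain dicts stand in for tensors here so the example runs framework-free; the key prefixes are modeled on the module dump above.

```python
# Full saved state of the encoder (mocked values).
full_state = {
    "encoder.weight": 0,
    "encoder_dp.emb.weight": 1,
    "rnns.0.module.weight_ih_l0": 2,
    "rnns.1.module.weight_ih_l0": 3,
    "rnns.2.module.weight_ih_l0": 4,
}

# Keep only the embedding layers and the first LSTM (rnns.0).
keep_prefixes = ("encoder.", "encoder_dp.", "rnns.0.")
partial = {k: v for k, v in full_state.items()
           if k.startswith(keep_prefixes)}
# partial now excludes rnns.1 and rnns.2
```

In real PyTorch code, `partial` could then be passed to `module.load_state_dict(partial, strict=False)`, which tolerates the missing keys for the layers that were filtered out.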