I was looking at the output of the encoder.
- First I trained a language model and saved its encoder. I get the output of this encoder for an input text `t`.
- Then I load the same encoder into a classifier, which has the same configuration as the language model. I get the encoder via `learn.model[0]` and store its output for the same input text `t`.
However, in the two cases above I am getting entirely different tensors. I understand that the shapes of the returned values differ: case 1 returns only `raw_outputs` and `outputs`, while case 2 returns `raw_outputs`, `outputs`, and a `mask`. But I thought the outputs of both encoders would be the same. Can anyone explain the cause of the difference?
In the meantime, I am trying to understand the difference by reading the fastai source code.
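One thing I am also checking on my end: whether both models were in eval mode when I captured the outputs. Dropout alone can make two copies of the same encoder disagree on identical input. Here is a minimal PyTorch sketch of that effect; the `Linear`/`Dropout` stack is just a hypothetical stand-in, not the actual AWD-LSTM encoder:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "encoder" with dropout, like the regularized layers in AWD-LSTM.
encoder = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))

x = torch.randn(1, 8)

# In train mode, dropout randomly zeroes activations, so the same
# encoder can produce different tensors for the same input text.
encoder.train()
out_a = encoder(x)
out_b = encoder(x)
print(torch.equal(out_a, out_b))  # usually False

# In eval mode, dropout is disabled and the output is deterministic.
encoder.eval()
out_c = encoder(x)
out_d = encoder(x)
print(torch.equal(out_c, out_d))  # True
```

So before diving deeper into the library code, it is worth calling `learn.model.eval()` (or `learn.model[0].eval()`) on both learners and repeating the comparison.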