I have attempted to split my Tabular Learner in two, as I am trying to decouple some internal transformations from the network. To do this, I created a forward hook that lets me grab the output of the softmax layer in my model. After applying some transformations to these activations, I would like to run the last few steps of my learner manually to get a prediction. But even when I change nothing, running the predictions through learn.get_preds vs. applying the steps manually (as seen below) does not give me the same results.
Learner object:
… (9) Softmax -> (10) BatchNorm1d -> (11) Dropout -> (12) Linear
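The hook mentioned above might look roughly like the sketch below (the names `captured` and `save_output`, and the layer sizes, are illustrative stand-ins, not the actual learner code):

```python
import torch
import torch.nn as nn

# Dictionary to hold the hooked layer's output
captured = {}

def save_output(module, inputs, output):
    # Store a detached copy of the layer's output each forward pass
    captured["features"] = output.detach()

# A stand-in stack mirroring the tail of the model listing above;
# in the real case this would be learn.model.layers
layers = nn.Sequential(
    nn.Linear(8, 4),
    nn.Softmax(dim=1),   # the layer being hooked
    nn.BatchNorm1d(4),
    nn.Dropout(0.5),
    nn.Linear(4, 2),
)
handle = layers[1].register_forward_hook(save_output)

x = torch.randn(3, 8)
_ = layers(x)
handle.remove()  # clean up the hook when done
```

After the forward pass, `captured["features"]` holds the softmax activations for the batch (shape `[3, 4]` here).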
# Activations captured by the hook at the softmax layer
softmax_activations = torch.Tensor(softmax_layer.features)

# The last three layers of the model
batch_layer = learn.model.layers[-3]    # BatchNorm1d
dropout_layer = learn.model.layers[-2]  # Dropout
linear_layer = learn.model.layers[-1]   # Linear

# Apply them in order
batch_transformed = batch_layer(softmax_activations)
dropout_transformed = dropout_layer(batch_transformed)
softmax_y_hat = linear_layer(dropout_transformed)
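For reference, the manual steps can be reproduced as a self-contained sketch (layer sizes and inputs here are illustrative). One thing worth checking when comparing against learn.get_preds() is the train/eval mode of the layers: BatchNorm1d normalizes with batch statistics in training mode but with running statistics in eval mode, and Dropout randomly zeroes activations in training mode but is a no-op in eval mode.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins for learn.model.layers[-3:], with illustrative sizes
batch_layer = nn.BatchNorm1d(4)
dropout_layer = nn.Dropout(0.5)
linear_layer = nn.Linear(4, 2)

softmax_activations = torch.rand(3, 4)

# Applying the layers while they are still in training mode:
# BatchNorm1d uses batch statistics and Dropout zeroes activations at random.
train_out = linear_layer(dropout_layer(batch_layer(softmax_activations)))

# The same layers in eval mode, which is how inference normally runs:
batch_layer.eval(); dropout_layer.eval(); linear_layer.eval()
with torch.no_grad():
    eval_out = linear_layer(dropout_layer(batch_layer(softmax_activations)))
```

On the same input, `train_out` and `eval_out` will generally differ because of the mode-dependent behavior of BatchNorm1d and Dropout.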
Even when I don’t alter anything, applying these last steps manually does not reproduce the model’s predictions.
Why is softmax_y_hat not giving me the same predictions as learn.get_preds() does?