What would be the best way to get the activations of the penultimate dense layer of RNNLearner.classifier (the one I highlighted in bold below) after the model is trained?
I used to pass the inputs directly to the trained model one at a time, but it seems there should be an easier and more efficient way to get the activations of a given layer.
For reference, my previous code for getting the 1200-dim activations:
```python
import numpy as np
import torch
import torch.nn.functional as F
from fastai.core import V, T  # old fastai v0.x helpers for wrapping numpy arrays

rnn_encoder = learner.model[0]
rnn_encoder.eval()
rnn_encoder.reset()

def concat_pooling(rnn_encoder_output):
    out, hidden = rnn_encoder_output
    h = hidden[-1]  # shape: 1 x seq_len x 400
    max_pool = F.adaptive_max_pool1d(h.permute(0, 2, 1), (1,)).view(-1)
    avg_pool = F.adaptive_avg_pool1d(h.permute(0, 2, 1), (1,)).view(-1)
    # last hidden state + max pool + avg pool -> tensor of size (1200,)
    cat = torch.cat([h[0][-1], max_pool, avg_pool])
    return cat.data.cpu().numpy()

encoded = np.array([concat_pooling(rnn_encoder(V(T([encoding_np[i]]))))
                    for i in range(len(encoding_np))])
```
If I understand correctly, you want the weights of a certain layer after training?
You can get the weights with list(learner.model.parameters()) and then pick the entry corresponding to the layer you want; the order matches the order of the model's layers.
In your case, it should be list(learner.model.parameters())[-5]
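To illustrate the indexing, here is a minimal sketch on a toy nn.Sequential stand-in (the real RNNLearner.classifier has a different architecture, so the exact index, -5 above, depends on the actual model):

```python
import torch
import torch.nn as nn

# Toy stand-in for learner.model; the layer sizes here are made up.
model = nn.Sequential(
    nn.Linear(400, 1200),
    nn.ReLU(),
    nn.Linear(1200, 50),   # the "penultimate" dense layer in this toy model
    nn.ReLU(),
    nn.Linear(50, 2),
)

params = list(model.parameters())
# parameters() yields weight, bias, weight, bias, ... in layer order,
# so the penultimate Linear's weight sits at index -4 in this toy model.
penultimate_weight = params[-4]
print(penultimate_weight.shape)  # torch.Size([50, 1200])
```

Counting backwards from the end (weight and bias per Linear) is usually easier than counting forwards past embeddings and RNN layers.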
Or do you want the output from an input up to that highlighted layer?
If so, see:
I suspect this can be done with register_forward_hook, as shown in lesson7-CAM; I only came across that lecture two days ago. Try searching this forum for register_forward_hook.
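A minimal sketch of the register_forward_hook approach on a toy model (with the real RNNLearner.classifier you would attach the hook to the actual penultimate Linear inside the trained learner.model, not this stand-in):

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained classifier; layer sizes are made up.
model = nn.Sequential(
    nn.Linear(400, 1200),
    nn.ReLU(),
    nn.Linear(1200, 50),   # layer whose output we want
    nn.Linear(50, 2),
)

activations = {}

def save_activation(module, inputs, output):
    # Called on every forward pass; detach so no autograd graph is kept alive.
    activations['penultimate'] = output.detach()

hook = model[2].register_forward_hook(save_activation)

with torch.no_grad():
    model(torch.randn(8, 400))  # a whole batch instead of one example at a time

hook.remove()  # always remove hooks when you're done
print(activations['penultimate'].shape)  # torch.Size([8, 50])
```

This also lets you feed whole batches through the model, which should be noticeably faster than the one-example-at-a-time loop above.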
Note that "Important: This documentation covers fastai v2, which is a from-scratch rewrite of fastai. The v1 documentation has moved to fastai1.fast.ai."