Hi guys,

What would be the best way to get the activations of the penultimate dense layer of `RNNLearner.classifier`, i.e. the `Linear(1200, 50)` layer marked below, after the model is trained:

```
(1): PoolingLinearClassifier(
  (layers): Sequential(
    (0): BatchNorm1d(1200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (1): Dropout(p=0.04000000000000001)
    (2): Linear(in_features=1200, out_features=50, bias=True)   <-- this one
    (3): ReLU(inplace)
    (4): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): Dropout(p=0.1)
    (6): Linear(in_features=50, out_features=1, bias=True)
  )
)
```

So far I have been passing the inputs through the trained model one by one, but it seems like there should be an easier and more efficient way to get the activations of a certain layer.
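To frame what I have in mind: one approach I've seen is registering a PyTorch forward hook on the target layer, which captures its output on every forward pass without modifying the model. Here's a minimal sketch on a toy stand-in for the classifier head (layer indices and sizes taken from the printout above; on the real model the target would presumably be `learner.model[1].layers[2]`, though I'm not certain of the exact path):

```python
import torch
import torch.nn as nn

# Toy stand-in for the PoolingLinearClassifier head, shapes from the printout above
layers = nn.Sequential(
    nn.BatchNorm1d(1200),
    nn.Dropout(p=0.04),
    nn.Linear(1200, 50),   # the penultimate dense layer we want
    nn.ReLU(inplace=True),
    nn.BatchNorm1d(50),
    nn.Dropout(p=0.1),
    nn.Linear(50, 1),
)
layers.eval()

activations = {}

def save_activation(module, inp, out):
    # Called after the layer's forward pass; stash a detached copy of its output.
    activations['penultimate'] = out.detach()

hook = layers[2].register_forward_hook(save_activation)

x = torch.randn(4, 1200)   # batch of 4 pooled encoder outputs
_ = layers(x)              # forward pass fills `activations`
hook.remove()              # remove the hook when done

print(activations['penultimate'].shape)  # torch.Size([4, 50])
```

The nice part is that the whole batch goes through at once, so this should avoid the one-example-at-a-time loop below.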

For reference, here is my previous code for getting the 1200-dim activations:

```
import numpy as np
import torch
import torch.nn.functional as F

rnn_encoder = learner.model[0]
rnn_encoder.eval()
rnn_encoder.reset()

def concat_pooling(rnn_encoder_output):
    out, hidden = rnn_encoder_output
    h = hidden[-1]  # shape: 1 x seq_len x 400
    max_pool = F.adaptive_max_pool1d(h.permute(0, 2, 1), (1,)).view(-1)
    avg_pool = F.adaptive_avg_pool1d(h.permute(0, 2, 1), (1,)).view(-1)
    # last hidden state + max pool + avg pool -> tensor of size (1200,)
    cat = torch.cat([h[0][-1], max_pool, avg_pool])
    return cat.data.cpu().numpy()

# V/T are fastai tensor/Variable wrappers; one example at a time
encoded = np.array([concat_pooling(rnn_encoder(V(T([encoding_np[i]]))))
                    for i in range(len(encoding_np))])
```