Getting activations of certain layer after model is trained

Hi guys,

What would be the best way to get the activations of the penultimate dense layer of RNNLearner.classifier, i.e. the one I highlighted in bold below, after the model is trained:

(1): PoolingLinearClassifier(
  (layers): Sequential(
    (0): BatchNorm1d(1200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (1): Dropout(p=0.04000000000000001)
    **(2): Linear(in_features=1200, out_features=50, bias=True)**
    (3): ReLU(inplace)
    (4): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): Dropout(p=0.1)
    (6): Linear(in_features=50, out_features=1, bias=True)
  )
)

I used to pass the inputs to the trained model one by one, but it seems there should be an easier and more efficient way to get the activations of a given layer.

My previous code for getting the 1200-dim activations, btw:

```python
import numpy as np
import torch
import torch.nn.functional as F
# V and T below are old fastai (0.7) helpers that wrap arrays into tensors/Variables

rnn_encoder = learner.model[0]
rnn_encoder.eval()
rnn_encoder.reset()

def concat_pooling(rnn_encoder_output):
    out, hidden = rnn_encoder_output
    h = hidden[-1]          # top-layer output, shape (1, seq_len, 400)
    max_pool = F.adaptive_max_pool1d(h.permute(0, 2, 1), 1).view(-1)
    avg_pool = F.adaptive_avg_pool1d(h.permute(0, 2, 1), 1).view(-1)
    # last time step + max pool + avg pool -> tensor of size (1200,)
    cat = torch.cat([h[0][-1], max_pool, avg_pool])
    return cat.data.cpu().numpy()

encoded = np.array([concat_pooling(rnn_encoder(V(T([encoding_np[i]]))))
                    for i in range(len(encoding_np))])
```
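Since the loop feeds examples one at a time, the pooling step at least can be batched; a minimal sketch in plain PyTorch (the function name is made up, and the shapes follow the `(1, seq_len, 400)` comment above, generalized to a batch dimension):

```python
import torch
import torch.nn.functional as F

def concat_pooling_batch(h):
    # h: (batch, seq_len, 400) top-layer RNN outputs
    last = h[:, -1]                                          # last time step, (batch, 400)
    max_pool = F.adaptive_max_pool1d(h.permute(0, 2, 1), 1).squeeze(2)
    avg_pool = F.adaptive_avg_pool1d(h.permute(0, 2, 1), 1).squeeze(2)
    return torch.cat([last, max_pool, avg_pool], dim=1)      # (batch, 1200)

x = torch.randn(5, 7, 400)
print(concat_pooling_batch(x).shape)   # torch.Size([5, 1200])
```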

If I understand correctly, you want the weights of a certain layer after training?

You can get the weights with `list(learner.model.parameters())` and then pick the entry for the corresponding layer. The order is the same as the order of the model's layers (each BatchNorm1d and each Linear contributes a weight tensor and a bias tensor).

In your case, the weight of that highlighted Linear should be `list(learner.model.parameters())[-6]`, and its bias `[-5]`.
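To check how the parameter ordering works out, here is a toy stand-in for the classifier head printed above (the full learner.model also contains the encoder's parameters before these, but negative indices count from the end, so the last eight line up the same way):

```python
import torch.nn as nn

# Toy stand-in for the PoolingLinearClassifier head printed above.
head = nn.Sequential(
    nn.BatchNorm1d(1200), nn.Dropout(0.04),
    nn.Linear(1200, 50), nn.ReLU(inplace=True),
    nn.BatchNorm1d(50), nn.Dropout(0.1),
    nn.Linear(50, 1),
)

params = list(head.parameters())
# Each BatchNorm1d and Linear contributes (weight, bias); Dropout/ReLU add nothing:
print(len(params))        # 8
print(params[-6].shape)   # torch.Size([50, 1200]) -- the 1200->50 Linear weight
print(params[-5].shape)   # torch.Size([50]) -- its bias
```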

Or do you want the output from an input up to that highlighted layer?
If so, see:

I suspect it has to do with register_forward_hook in lesson7-CAM, since I just came across that lecture two days ago. Try searching this forum for register_forward_hook.

I saw something posted from @krishnavishalv

```python
import torchvision.models as models

outputs = []
def hook(module, input, output):
    outputs.append(output)

res50_model = models.resnet50(pretrained=True)
res50_model.layer4[0].conv2.register_forward_hook(hook)
out = res50_model(res)    # res, res1: preprocessed input batches
out = res50_model(res1)
print(outputs)
```
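The same pattern can be tried without downloading ResNet weights; a minimal sketch on a toy network (all names here are made up), including removing the hook via the handle that register_forward_hook returns:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

captured = []
def hook(module, inputs, output):
    captured.append(output.detach())    # detach so stored activations drop the graph

handle = net[1].register_forward_hook(hook)   # capture the ReLU output
_ = net(torch.randn(3, 8))
handle.remove()                               # stop capturing
_ = net(torch.randn(3, 8))                    # this pass is not recorded

print(len(captured), captured[0].shape)       # 1 torch.Size([3, 4])
```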

You should use the ActivationStats callback or one of the other callbacks in the fastai hooks module.

http://docs.fast.ai/callbacks.hooks.html


Hello @jeremy, I am still not able to do it. Could you provide an example?

Refer to Hooks in fastai documentation -

  1. Hooks-notebook
  2. Hooks-documentation

Also, I wrote some sample code. See if it works for you.

```python
from functools import partial
# children and learn come from the fastai environment

layers_ = children(learn.model)[1].layers
act_means = [[] for _ in layers_]
act_stds  = [[] for _ in layers_]
outputs   = [[] for _ in layers_]

def append_stats(i, mod, inp, outp):
    if mod.training:
        act_means[i].append(outp.data.mean())
        act_stds[i].append(outp.data.std())
        outputs[i].append(outp)

for i, m in enumerate(layers_):
    m.register_forward_hook(partial(append_stats, i))
```
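A self-contained sketch of the same pattern (toy model and made-up names, no fastai needed), so it can be sanity-checked before wiring it to learn.model:

```python
import torch
import torch.nn as nn
from functools import partial

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 6), nn.ReLU(), nn.Linear(6, 2))
layers_ = list(net.children())

act_means = [[] for _ in layers_]
act_stds  = [[] for _ in layers_]

def append_stats(i, mod, inp, outp):
    if mod.training:                       # only record in training mode
        act_means[i].append(outp.detach().mean().item())
        act_stds[i].append(outp.detach().std().item())

handles = [m.register_forward_hook(partial(append_stats, i))
           for i, m in enumerate(layers_)]

net.train()
_ = net(torch.randn(4, 10))                # one forward pass -> one entry per layer
for h in handles:
    h.remove()

print([len(m) for m in act_means])         # [1, 1, 1]
```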


Link doesn’t work anymore, unfortunately

It’s because docs.fast.ai now refers to v2.

Note that "Important: This documentation covers fastai v2, which is a from-scratch rewrite of fastai. The v1 documentation has moved to fastai1.fast.ai."


Is there an update on v2?