Need help with register_forward_hook

I am stuck trying to get intermediate activations from a columnar model using hooks.

This is what my model looks like:

m = md.get_learner([], len(df.columns), 0, df.shape[1],
                   [10000, 10000, 10000], [0], y_range=None, use_bn=True)

MixedInputModel(
  (embs): ModuleList()
  (lins): ModuleList(
    (0): Linear(in_features=207, out_features=10000, bias=True)
    (1): Linear(in_features=10000, out_features=10000, bias=True)
    (2): Linear(in_features=10000, out_features=10000, bias=True)
  )
  (bns): ModuleList(
    (0): BatchNorm1d(10000, eps=1e-05, momentum=0.1, affine=True)
    (1): BatchNorm1d(10000, eps=1e-05, momentum=0.1, affine=True)
    (2): BatchNorm1d(10000, eps=1e-05, momentum=0.1, affine=True)
  )
  (outp): Linear(in_features=10000, out_features=207, bias=True)
  (emb_drop): Dropout(p=0)
  (drops): ModuleList(
    (0): Dropout(p=0)
  )
  (bn): BatchNorm1d(207, eps=1e-05, momentum=0.1, affine=True)
)

I run the following code:

outputs = []
def hook(module, input, output):
    print(1)                   # just to confirm the hook fires
    outputs.append(output)

hk = m.model.lins[1].register_forward_hook(hook)

out = m.model(x_cat=Variable(next(iter(md.trn_dl))[0]),
              x_cont=Variable(next(iter(md.trn_dl))[1]))
hk.remove()

… and nothing happens with the hook: the print never fires and outputs stays empty. The variable out does get populated with a torch.cuda.FloatTensor of size 2048x207, so the forward pass itself runs.

Not sure what I’m missing. I am able to run the example from the PyTorch tutorial with no hurdles (other than adding Variable() where needed for PyTorch 0.3):
https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks

I suspect something is not right when I call m.model.lins[1].register_forward_hook, but can’t figure it out.
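For what it’s worth, here is a toy sketch of the kind of check I have in mind (a standalone model written for a recent PyTorch, not my actual fastai learner; on 0.3 the input would also need a Variable() wrapper): register a printing hook on every named submodule and see which ones the forward pass actually reaches.

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.lins = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])
        self.outp = nn.Linear(8, 4)

    def forward(self, x):
        for l in self.lins:
            x = torch.relu(l(x))
        return self.outp(x)

model = Toy()

def make_hook(name):
    def hook(module, input, output):
        print('forward ran through:', name)
    return hook

# register a printing hook on every named submodule (skip the root module '')
handles = [mod.register_forward_hook(make_hook(name))
           for name, mod in model.named_modules() if name]

model(torch.randn(2, 8))   # prints each submodule the forward actually hits
for h in handles:
    h.remove()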

Many thanks for your help!

An additional finding: I am able to add a hook to, e.g., the (outp) layer (see the list of layers above):

e.g.: hkk = m.model.outp.register_forward_hook(printnorm)

Now I need to figure out how to add a hook to a layer inside a ModuleList, if that’s possible.
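Just to convince myself that indexing into a ModuleList is not the issue, a quick standalone check (toy layers, nothing fastai-specific) seems to work as expected:

import torch
import torch.nn as nn

lins = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 4)])

captured = []
handle = lins[1].register_forward_hook(lambda mod, inp, out: captured.append(out))

x = torch.randn(3, 4)
for l in lins:              # the hook only fires if lins[1] is actually called
    x = l(x)

print(len(captured))        # -> 1
handle.remove()

So hooking an element of a ModuleList should be fine in itself; the question is whether that element ever gets called.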

EDIT:

Second observation: my hooks work on m.model.bns[0] and m.model.lins[0], but not on m.model.bns[1] and m.model.lins[1].

I think I’ve solved my problem. If you look at the list of layers in my OP, my (drops) ModuleList only had layer (0) and was missing (1) and (2), presumably because I passed a single-element dropout list ([0]) to get_learner. As far as I can tell, the forward pass zips lins, drops and bns together, and zip() stops at the shortest list, so lins[1], lins[2], bns[1] and bns[2] were never called at all, which is why hooks on them never fired even though the output tensor looked fine (see the sketch below).
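A minimal sketch of the zip() behaviour I mean (plain strings standing in for the layers, nothing fastai-specific):

lins  = ["lin0", "lin1", "lin2"]
bns   = ["bn0", "bn1", "bn2"]
drops = ["drop0"]                 # my bug: only one Dropout layer was created

# zip() stops at the shortest input, so the loop body runs exactly once
for l, d, b in zip(lins, drops, bns):
    print(l, d, b)                # prints only: lin0 drop0 bn0

Presumably passing one dropout value per layer (e.g. [0, 0, 0]) gives (drops) three entries and the hooks on the later layers fire as expected.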

I’ve lost a few hours on this, but at least I learnt a few things in the process!