I am stuck trying to get intermediate activations from a columnar model using hooks.
This is what my model looks like (the empty embs ModuleList below means no embeddings, so the first argument to get_learner is an empty list):

m = md.get_learner([], len(df.columns), 0, df.shape[1],
                   [10000, 10000, 10000], [0], y_range=None, use_bn=True)
MixedInputModel(
(embs): ModuleList(
)
(lins): ModuleList(
(0): Linear(in_features=207, out_features=10000, bias=True)
(1): Linear(in_features=10000, out_features=10000, bias=True)
(2): Linear(in_features=10000, out_features=10000, bias=True)
)
(bns): ModuleList(
(0): BatchNorm1d(10000, eps=1e-05, momentum=0.1, affine=True)
(1): BatchNorm1d(10000, eps=1e-05, momentum=0.1, affine=True)
(2): BatchNorm1d(10000, eps=1e-05, momentum=0.1, affine=True)
)
(outp): Linear(in_features=10000, out_features=207, bias=True)
(emb_drop): Dropout(p=0)
(drops): ModuleList(
(0): Dropout(p=0)
)
(bn): BatchNorm1d(207, eps=1e-05, momentum=0.1, affine=True)
)
Then I run the following code:
from torch.autograd import Variable

outputs = []

def hook(module, input, output):
    print(1)
    outputs.append(output)

hk = m.model.lins[1].register_forward_hook(hook)

batch = next(iter(md.trn_dl))  # grab a single batch so x_cat and x_cont come from the same draw
out = m.model(x_cat=Variable(batch[0]), x_cont=Variable(batch[1]))
hk.remove()
… and nothing happens with the hook: print(1) is never reached and outputs stays empty. The variable out does get populated with a torch.cuda.FloatTensor of size 2048x207, so the forward pass itself completes.
Not sure what I'm missing. I am able to run the example from the PyTorch tutorial with no hurdles (other than adding Variable() where needed for PyTorch 0.3):
https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html#forward-and-backward-function-hooks
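For comparison, here is roughly the minimal version of that pattern that does work for me, as a standalone sketch on a plain nn.Linear (nothing from my columnar setup):

import torch
import torch.nn as nn
from torch.autograd import Variable

lin = nn.Linear(4, 2)
captured = []

def hook(module, input, output):
    # should fire once per forward pass of `lin`
    captured.append(output)

h = lin.register_forward_hook(hook)
out = lin(Variable(torch.randn(3, 4)))
h.remove()

print(len(captured))  # prints 1 -- the hook fired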
I suspect something is not right in how I call m.model.lins[1].register_forward_hook, but I can't figure out what.
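To narrow it down, I figure a diagnostic along these lines (a sketch, reusing the batch variable from the code above) would show which submodules' forward() actually runs:

# Hook every submodule and record which ones fire during one forward pass.
fired = []

def make_hook(name):
    def diag_hook(module, input, output):
        fired.append(name)
    return diag_hook

handles = [mod.register_forward_hook(make_hook(name))
           for name, mod in m.model.named_modules()]

out = m.model(x_cat=Variable(batch[0]), x_cont=Variable(batch[1]))

for h in handles:
    h.remove()

print(fired)  # if 'lins.1' never shows up here, its forward() is being bypassed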
Many thanks for your help!