Sorry - my bad.
Best regards
EDIT:
I’ve found a solution thanks to:
I’ve modified the code, and it now looks like this:
import torch

outputs = []

def hook(module, input, output):
    # called on every forward pass through the hooked layer
    outputs.append(output)

awd = learn.model
awd[1].layers[2].register_forward_hook(hook)
awd.reset()
awd.eval()  # eval() returns the module, but splitting the call out is clearer
with torch.no_grad():
    awd(xb)
The outputs from this layer are then collected in the outputs list.
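One small addition (not from the original post, just a sketch): register_forward_hook returns a handle, so the hook can be detached once the activations have been collected; otherwise it keeps appending on every later forward pass. Reusing hook, awd, and xb from above:

handle = awd[1].layers[2].register_forward_hook(hook)
awd.reset()
awd.eval()
with torch.no_grad():
    awd(xb)
handle.remove()       # stop capturing after this batch
print(len(outputs))   # one entry per forward pass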