Very happy to know that my code is useful.
The source code of flatten_model is:

```python
flatten_model = lambda m: sum(map(flatten_model, m.children()), []) if num_children(m) else [m]
```
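To make it concrete, here is a tiny plain-PyTorch sketch (flatten_model and num_children are redefined locally here, not imported from fastai) showing what it returns: the leaf modules of a model, with nested containers unwrapped.

```python
import torch.nn as nn

# Local stand-ins for the fastai helpers, just for this demo
num_children = lambda m: len(list(m.children()))
flatten_model = lambda m: sum(map(flatten_model, m.children()), []) if num_children(m) else [m]

model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()),
    nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10)),
)
print(flatten_model(model))
# -> [Conv2d(3, 8, ...), ReLU(), Linear(8, 16), ReLU(), Linear(16, 10)]  -- containers unwrapped
```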
And I found it by looking at hooks.py in fastai:

```python
class HookCallback(LearnerCallback):
    def on_train_begin(self, **kwargs):
        if not self.modules:
            self.modules = [m for m in flatten_model(self.learn.model)
                            if hasattr(m, 'weight')]
        self.hooks = Hooks(self.modules, self.hook)
```
It means that if modules are not specified, it picks every module in the model that has a weight attribute. So for your case, you just need to select the last linear layer, e.g. as sketched below. I got familiar with these things by playing around with ActivationStats and model_sizes in hooks.py.
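Here is a hedged sketch of what I mean (assuming fastai v1, where flatten_model lives in fastai.torch_core): pick only the last Linear leaf and capture its activations with a plain PyTorch forward hook.

```python
import torch
import torch.nn as nn
from fastai.torch_core import flatten_model  # fastai v1 location (assumption)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
last_linear = [m for m in flatten_model(model) if isinstance(m, nn.Linear)][-1]

acts = {}
def store_acts(module, inp, out):
    acts['last_linear'] = out.detach()   # keep the output of the forward pass

handle = last_linear.register_forward_hook(store_acts)
model(torch.randn(1, 4))                 # one forward pass fills `acts`
handle.remove()                          # always remove hooks when done
print(acts['last_linear'].shape)         # torch.Size([1, 2])
```

I believe you could pass [last_linear] as the modules argument of a HookCallback subclass in the same spirit, but I haven't tried that myself.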
I think for getting just the activations, you don't need a callback; hook_outputs is enough. Actually, I haven't played with the callback yet, so I'm still not familiar with that approach.
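Something like this is a hedged sketch of what I mean, assuming fastai v1's fastai.callbacks.hooks API, where hook_outputs returns a Hooks object that stores each module's output and works as a context manager:

```python
import torch
from fastai.callbacks.hooks import hook_outputs  # fastai v1 (assumption)
from fastai.torch_core import flatten_model

def get_activations(model, xb):
    "Run one forward pass and return the stored output of every weighted leaf module."
    modules = [m for m in flatten_model(model) if hasattr(m, 'weight')]
    with hook_outputs(modules, detach=True) as hooks:  # hooks are removed on exit
        model.eval()
        with torch.no_grad():
            model(xb)
        return hooks.stored                            # one tensor per hooked module

# Usage (hypothetical names): acts = get_activations(learn.model, xb)
```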
I'd really appreciate it if someone could give more examples on this topic too.