The source code of flatten_model is: flatten_model = lambda m: sum(map(flatten_model,m.children()),[]) if num_children(m) else [m]
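To illustrate what that recursion does, here is a small self-contained sketch in plain PyTorch (flatten_model and num_children are re-defined locally so it runs without fastai; the toy model is just for demonstration):

# Self-contained illustration of the flatten_model quoted above (plain PyTorch).
import torch.nn as nn

num_children = lambda m: len(list(m.children()))
flatten_model = lambda m: sum(map(flatten_model, m.children()), []) if num_children(m) else [m]

# a toy nested model: flatten_model recurses into containers and returns only leaf modules
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()),
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Linear(8, 2)),
)
print(flatten_model(model))
# -> [Conv2d(...), ReLU(), AdaptiveAvgPool2d(...), Linear(...)]  (only the leaf modules)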
And I found it by looking at hooks.py in fastai:

class HookCallback(LearnerCallback):
    def on_train_begin(self, **kwargs):
        if not self.modules:
            self.modules = [m for m in flatten_model(self.learn.model)
                            if hasattr(m, 'weight')]
        self.hooks = Hooks(self.modules, self.hook)
It indicates that if modules are not specified, it picks all the modules in the model that have a weight attribute. So for your case, you just need to choose the last linear layer. I got familiar with these things by playing around with ActivationStats and model_sizes in hooks.py
I think for getting just the activations, you don't need a callback. hook_outputs is enough. Actually, I haven't played with the callbacks yet, so I'm still not familiar with that approach.
I'd really appreciate it if someone could give more examples on this topic too.
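For what it's worth, here is a minimal sketch of that (fastai v1; an existing, already-built Learner called learn is assumed): pick the last linear layer from flatten_model and capture its activations for one batch with hook_outputs.

# Minimal sketch (fastai v1, assumed names): grab the activations of the last linear
# layer for a single batch with hook_outputs -- no callback needed.
import torch
from fastai.torch_core import flatten_model          # the lambda quoted earlier in the thread
from fastai.callbacks.hooks import hook_outputs

# pick the last nn.Linear among the leaf modules returned by flatten_model
last_lin = [m for m in flatten_model(learn.model)
            if isinstance(m, torch.nn.Linear)][-1]

xb, yb = learn.data.one_batch()                        # one batch (CPU tensors by default)
xb = xb.to(next(learn.model.parameters()).device)      # move it to the model's device
learn.model.eval()
with hook_outputs([last_lin], detach=True) as hooks:   # forward hook, removed on exit
    with torch.no_grad():
        learn.model(xb)
acts = hooks.stored[0]     # activations of the last linear layer for this batch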
Thanks Jeremy. Can I add the callback after the model has already been trained? Because in this case I train the model first and then try to get the activations on the validation set.
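A sketch of that workflow (same assumptions as above: fastai v1, a trained Learner learn, last_lin chosen as in the previous snippet): attach the hook after training and collect the activations batch by batch over the validation set, since the hook's stored value is replaced on every forward pass.

# Sketch (fastai v1): hooks can be attached after training; here we collect the hooked
# layer's activations over the whole validation set, one batch at a time.
import torch
from fastai.callbacks.hooks import hook_outputs

learn.model.eval()
device = next(learn.model.parameters()).device
all_acts = []
with hook_outputs([last_lin], detach=True) as hooks:     # last_lin chosen as above
    with torch.no_grad():
        for xb, yb in learn.data.valid_dl:               # iterate the validation batches
            learn.model(xb.to(device))
            all_acts.append(hooks.stored[0].cpu())       # stored holds the latest batch
acts = torch.cat(all_acts)                               # (n_valid, n_out) activations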
Hey, if useful: I implemented Grad-CAM using fastai 1.0; it lets you see which parts of the image are "used" by the network to make a prediction. It's a variation on CAM that uses gradient averaging.
This might also be interesting to some of you because it uses gradient/backward hooks as well as output hooks.
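For anyone curious, the core of that recipe can be sketched with fastai v1 hooks in a few lines (an illustrative sketch, not the notebook's exact code; learn, an input batch xb already on the model's device, and an integer target class c are assumed): hook both the output of the last conv block and its gradient, average the gradients spatially to get per-channel weights, and keep the positive part of the weighted sum.

# Rough Grad-CAM sketch with fastai v1 hooks (assumed names: learn, xb, target class c).
import torch.nn.functional as F
from fastai.callbacks.hooks import hook_output

conv_block = learn.model[0]          # body of a cnn_learner model (assumption about the arch)
learn.model.eval()
with hook_output(conv_block) as hook_a:                   # stores the feature maps
    with hook_output(conv_block, grad=True) as hook_g:    # stores their gradient
        preds = learn.model(xb)
        preds[0, c].backward()                            # backprop the class-c score

acts  = hook_a.stored[0]             # (n_channels, h, w) feature maps for the first image
grads = hook_g.stored[0][0]          # matching gradients
weights = grads.mean(1).mean(1)                           # spatially averaged gradient per channel
cam = F.relu((weights[:, None, None] * acts).sum(0))      # Grad-CAM map, positives only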
Thank you for your great and instructive notebook!
I incorporated it into my lung tissue pet-project.
While running the code several times I sometimes encountered a strange bug that I was able to fix by setting the model in evaluation mode with learn.model.eval().
Poor resolution is inevitable: the output of the "bottleneck" units in ResNet is a set of 7x7 feature maps, so all the CAM methods are constrained by this.
I'm not really sure about the state of the art (and it's probably hard to define?). For satellites especially, I did PCA/t-SNE on the last-layer vectors, and it's apparent that the network is distinguishing "twisty roads" from "grid cities" and "big blocks" from "lots of little houses". There's also a huge color component: beiges and ochres from Mediterranean cities, …
So I'm thinking of trying out a bunch of things and seeing what sticks.
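(As an aside, that kind of inspection only takes a few lines once the activations are collected; a sketch, assuming acts is an (N, D) tensor of last-layer vectors gathered with a hook as in the snippets above:)

# Sketch: project collected last-layer vectors to 2-D with PCA for a quick look
# at what the network separates. `acts` is an (N, D) CPU tensor gathered with a hook.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

coords = PCA(n_components=2).fit_transform(acts.numpy())
plt.scatter(coords[:, 0], coords[:, 1], s=4)
plt.title('PCA of last-layer activations')
plt.show()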
Hi, I added Guided Backprop and Grad-CAM guided backprop to the notebook. Literally three lines of code using fastai Hooks…
The idea is to determine the importance of each pixel to the prediction made by the network. To avoid interference, the backprop is clipped to positive gradient contributions. This was introduced in the Striving for Simplicity: The All Convolutional Net paper.
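A rough sketch of that clipping with plain PyTorch backward hooks (illustrative only, not the notebook's exact code; a model, an input image tensor x of shape (1, 3, H, W) and a target class index are assumed): the gradient flowing back through every ReLU is clamped to be non-negative before it continues toward the input image.

# Guided-backprop sketch in plain PyTorch (illustrative, not the notebook's exact code).
import torch
import torch.nn as nn

def _guided_relu_hook(module, grad_in, grad_out):
    # keep only positive gradient contributions flowing back through the ReLU
    return (torch.clamp(grad_in[0], min=0.0),)

def guided_backprop(model, x, target_class):
    model.eval()
    handles = [m.register_backward_hook(_guided_relu_hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    x = x.clone().requires_grad_(True)
    try:
        score = model(x)[0, target_class]
        score.backward()
    finally:
        for h in handles:           # always remove the hooks again
            h.remove()
    return x.grad[0]                # per-pixel gradients w.r.t. the input image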
Example output:
Element-wise multiplication with grad-cam:
And here is the notebook, also showing what happens when you just plot the gradients, without doing guided backprop.
I'm not quite grasping the exact role of global average pooling in CAM. If you read the paper, one of the major points the CAM authors insist upon is the ability of global-average-pooled CNNs to extract features and identify objects even if they are not specifically trained to do so.
By GAP-CNNs they mean CNNs equipped with a GAP layer between the last convolutional layer and the final fully connected classifier.
The equation to look at is (2), since it defines every spatial element of the CAM. In this way they get a tensor with the same spatial dimensions as the feature map output by the last conv layer. Then, typically, one upsamples that map and visualizes it overlaid on the original image, like @henripal did, to "see what the CNN considers important for classifying the image".
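For reference, equation (2) of the CAM paper defines the class activation map M_c for class c at spatial location (x, y) as a weighted sum of the last conv layer's feature maps f_k, with the weights w_k^c taken from the final dense layer:

$$ M_c(x, y) = \sum_k w_k^{c} \, f_k(x, y) $$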
My question is:
The authors insist a lot on the importance of global average pooling, but in the end they define their CAMs in a way that does not consider the GAP layer at all. Written like that, it's as if the dense layer were directly connected to the last conv layer. On the other hand, if you plug in the GAP, you just obtain a vector whose dimension corresponds to the number of classes, which is quite useless for visualization purposes.
Can someone shed a bit of light on that? Thanks!
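For what it's worth, here is a minimal sketch of the computation described above, in plain PyTorch with hypothetical names (feat is the last conv layer's feature map for one image, fc the dense classifier that was trained on top of the GAP): the class weights are applied directly to the un-pooled maps, which is what yields a spatial map rather than a single score.

# Sketch of eq. (2) in plain PyTorch (hypothetical names, 224x224 input size assumed).
import torch
import torch.nn.functional as F

def cam_for_class(feat, fc, c):
    # feat: (n_channels, h, w) activations of the last conv layer for one image
    # fc:   nn.Linear(n_channels, n_classes), the classifier trained on GAP-ed features
    w_c = fc.weight[c]                               # (n_channels,) weights for class c
    cam = (w_c[:, None, None] * feat).sum(0)         # eq. (2): sum_k w_k^c * f_k(x, y)
    cam = F.interpolate(cam[None, None], size=(224, 224),
                        mode='bilinear', align_corners=False)[0, 0]
    return cam                                       # upsampled map to overlay on the image

The GAP only enters through training: because the classifier sees one pooled value per feature map, each w_k^c ends up tied to exactly one channel, which is what makes re-applying the weights spatially meaningful.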
@henripal @dhoa @MicPie
As mentioned in another thread, there is a nice general library that visualizes CNNs using many approaches, which allows for easy comparison (Keras/TensorFlow for now):
Since a couple of you seem to have had success hooking into PyTorch, maybe you could help make innvestigate work with PyTorch/fastai?