Visualizing intermediate layers a la Zeiler and Fergus

In lesson 1 Jeremy introduced the Zeiler and Fergus paper, which visualizes intermediate layers to help us develop intuition for how the layers of a CNN progressively learn the building blocks needed for the classification task at hand. I was curious whether there is a ready-to-use library for visualizing intermediate layers, to help us beginners develop intuition for how CNNs work, and when they fail to work, on a given set of images and classification task.

There have been multiple questions in the forums (most linked below), but I don’t see anything built into the fastai library yet. There is a Keras approach by @yashkatariya in the fast.ai community, but nothing in PyTorch. There is also Utku Ozbulak’s PyTorch GitHub repository, which seems like it will be very useful, though I’m not ready for it yet myself. Finally, Keras creator François Chollet shared Keras code with similar functionality and wrote a post, “How Convolutional Neural Networks See the World”, that may have some additional ideas worth exploring.

https://forums.fast.ai/t/how-to-visualize-different-layers-in-fcn/3552
https://forums.fast.ai/t/visualise-layers/27619
https://forums.fast.ai/t/wiki-fastai-library-feature-requests/7764/35
https://forums.fast.ai/t/getting-activations-of-certain-layer-after-model-is-trained/26561/2
https://forums.fast.ai/t/pytorch-best-way-to-get-at-intermediate-layers-in-vgg-and-resnet/5707/6
https://forums.fast.ai/t/things-i-dont-understand-while-visualizing-intermediate-layers/5697


Edit: See the new post below for a working single-image prediction notebook.

After a coding session with @ramon about getting the activations with hooks, I hacked together a small notebook (L1-stonefly_activations.ipynb) to visualize the activations of the different network layers:



To get the activations I used the following hook function (adapted from Jeremy):

from fastai.callbacks.hooks import HookCallback

class StoreHook(HookCallback):
    def on_train_begin(self, **kwargs):
        super().on_train_begin(**kwargs)  # registers forward hooks on the model's modules
        self.acts = []
    # called on every forward pass; returning o stores that module's output
    def hook(self, m, i, o): return o
    # once training ends, keep whatever the hooks last stored
    def on_train_end(self, train, **kwargs): self.acts = self.hooks.stored
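
For reference, here is roughly how I wire it up (a sketch of what I believe the fastai v1 API looks like; data is my DataBunch, and the learn.store_hook attribute name is my assumption about how fastai names callback instances):

from fastai.vision import *

# Passing the callback class via callback_fns lets fastai create it with the learner;
# HookCallback then registers forward hooks on the model's modules.
learn = cnn_learner(data, models.resnet34, metrics=error_rate,
                    callback_fns=[StoreHook])
learn.fit_one_cycle(1)

# If fastai derives the attribute name from the class name (CamelCase -> snake_case),
# the stored outputs, one per hooked module, should be available afterwards as:
acts = learn.store_hook.acts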

I am not sure whether I used the hook correctly.
The image dimension in the results is strange: I only see 34 images, although the dataset has 3,000+.
I also could not figure out how to get the original images from the data loader to compare them to the activations.
Maybe there is a much easier way to get the activations?

The notebook above is based on my previous post and was inspired by the notebook from @KarlH (thank you, I learned a lot!).

Kind regards
Michael


What is the purpose of m and i, since the function never uses them? Thanks!

Glad you found the notebook helpful.

The activations you get from the forward hook are generated every time you run something through the model, so you only ever have the activations for a single batch: when you run a new batch, the previously stored activations are overwritten. Since you’re running the hook function as a callback, I think the activations you actually get out are those from the final batch of your validation set, which likely has 34 images in it.
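
To make that concrete, here is a bare PyTorch sketch (the model and layer choice are just placeholders) showing that a forward hook only keeps the output of the most recent forward pass:

import torch
import torchvision

model = torchvision.models.resnet34(pretrained=False).eval()
stored = {}

def save_output(module, inp, out):
    # overwritten on every forward pass, so only the last batch survives
    stored['layer4'] = out.detach()

handle = model.layer4.register_forward_hook(save_output)

model(torch.randn(34, 3, 224, 224))   # e.g. a final partial batch of 34 images
model(torch.randn(64, 3, 224, 224))   # a later batch overwrites the stored tensor
print(stored['layer4'].shape)         # torch.Size([64, 512, 7, 7])

handle.remove()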

I think you’ll find that getting activations for specific images is easier if you do it outside the training loop. You can load a specific image or images of interest and just pass those through the model. If you want multiple batches’ worth of activations, you’ll have to loop through a dataloader and save the activations for each batch. If you do this, remember to move them to the CPU or you’ll run out of GPU memory really fast.
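
Something along these lines (a rough sketch reusing the hook idea from above; model, layer, and dataloader are placeholders for whatever you are working with):

import torch

all_acts = []

def save_output(module, inp, out):
    # move to CPU right away so saved activations don't eat GPU memory
    all_acts.append(out.detach().cpu())

handle = model.layer4.register_forward_hook(save_output)
device = next(model.parameters()).device

model.eval()
with torch.no_grad():
    for xb, yb in dataloader:          # any DataLoader yielding (images, labels)
        model(xb.to(device))

handle.remove()
acts = torch.cat(all_acts)             # activations for every image, in loader order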


This worked like a charm. Thank you so much. The code is also very readable. I really cannot thank you enough.
