In Lesson 1, Jeremy introduced the Zeiler and Fergus paper, which visualizes intermediate layers to help us develop intuition for how the layers of a CNN progressively learn the building blocks needed for the classification task at hand. I was curious whether there is a ready-to-use library for visualizing intermediate layers, so that beginners like me can develop intuition for how CNNs work (and when they fail) on a given set of images and classification task.
There have been multiple questions about this in the forums (most linked below), but I don't see anything built into the fastai library yet. There is a Keras approach by @yashkatariya in the fast.ai community, but I haven't found an equivalent in PyTorch. There is, however, Utku Ozbulak's PyTorch GitHub repository of visualization techniques, which seems like it will be very useful, though I'm not ready for it yet myself. Also, Keras creator Francois Chollet shared Keras code with similar functionality and wrote a post, "How Convolutional Neural Networks See the World," which may have some additional ideas worth exploring.
https://forums.fast.ai/t/how-to-visualize-different-layers-in-fcn/3552
https://forums.fast.ai/t/visualise-layers/27619
https://forums.fast.ai/t/wiki-fastai-library-feature-requests/7764/35
https://forums.fast.ai/t/getting-activations-of-certain-layer-after-model-is-trained/26561/2
https://forums.fast.ai/t/pytorch-best-way-to-get-at-intermediate-layers-in-vgg-and-resnet/5707/6
https://forums.fast.ai/t/things-i-dont-understand-while-visualizing-intermediate-layers/5697
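For anyone who lands here looking for a starting point: the common PyTorch mechanism underlying most of the threads above is the forward hook, which lets you capture a layer's output during a forward pass. Here is a minimal sketch using a toy CNN (the model and layer choice are placeholders, not fastai code); the same pattern applies to a trained fastai/torchvision model by picking a layer from `model.named_modules()`.

```python
import torch
import torch.nn as nn

# Dict to collect activations, keyed by a name we choose.
activations = {}

def save_activation(name):
    # register_forward_hook calls hook(module, inputs, output)
    # after the layer's forward pass; we stash a detached copy.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Toy CNN standing in for a real trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

# Hook the first conv layer.
model[0].register_forward_hook(save_activation("conv1"))

# One forward pass on a dummy image fills the dict.
x = torch.randn(1, 3, 32, 32)
model(x)

print(activations["conv1"].shape)  # torch.Size([1, 8, 32, 32])
```

Each of the 8 captured feature maps can then be shown as a grayscale image (e.g. with matplotlib's `imshow`) to see what that layer responds to, which is essentially what the Zeiler and Fergus visualizations build on.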