Is there any way to visualize the CNN layers of a pretrained model without passing any custom image? I need to visualize the layers of the model I trained on my dataset, not for a custom image. I just want to see how it has interpreted the images in my dataset. Thanks
The weights that the model has learned represent patterns. In the first layer these are patterns of pixels, in subsequent layers they are patterns of other patterns. You can visualize what these patterns are.
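To make that concrete, here is a minimal sketch of looking at first-layer patterns by rendering the conv filters themselves as tiny images. The small `nn.Sequential` model is a stand-in; with a real pretrained model you would grab its first conv layer instead:

```python
import torch
import torch.nn as nn

# Hypothetical small CNN standing in for a pretrained model;
# in practice you would load your own trained model here
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3),
)

# The learned weights of the first conv layer: shape (out_ch, in_ch, kH, kW)
filters = model[0].weight.detach().clone()

# Normalize each filter to [0, 1] so it can be shown as a tiny RGB patch
f_min = filters.amin(dim=(1, 2, 3), keepdim=True)
f_max = filters.amax(dim=(1, 2, 3), keepdim=True)
filters_img = (filters - f_min) / (f_max - f_min + 1e-8)

print(filters_img.shape)  # one 5x5 RGB patch per output channel
# e.g. plt.imshow(filters_img[0].permute(1, 2, 0)) to display filter 0
```

This only works well for the first layer, where the filters live in pixel space; deeper layers need the synthesis approach described below.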
For example, you can synthesize an image for which a certain pattern gives the highest possible activation. That gives you an idea of what this particular pattern represents. (And when I say pattern here, I mean the weights corresponding to a single channel in a given layer, plus all the previous layers it depends on.)
Thanks @machinethink for the reply … could you share some code snippets showing what exactly I should do (if possible, an example notebook, so I'll get a better understanding)? Thanks
This is similar to techniques used by DeepDream. You start with a randomized input image, then you optimize over the image based on the gradients from the model, where you boost the gradients coming from the layer that you want to examine and weaken the gradients from other layers. I don’t have any source code for this but that’s the general idea.
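A rough sketch of that gradient-ascent idea in PyTorch follows. The toy model, layer index, and channel number are all placeholders for illustration; you would plug in your own trained model and pick a real layer/channel:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained CNN; swap in your own model here
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the weights; we only optimize the image

target_layer = model[2]  # hypothetical choice: the second conv layer
channel = 5              # arbitrary channel index, for illustration

activations = {}
handle = target_layer.register_forward_hook(
    lambda module, inp, out: activations.update(out=out)
)

# Start from random noise and do gradient ascent on the *image*, not the weights
img = torch.randn(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.1)

for _ in range(50):
    opt.zero_grad()
    model(img)
    # maximizing the channel's mean activation == minimizing its negative
    loss = -activations["out"][0, channel].mean()
    loss.backward()
    opt.step()

handle.remove()
result = img.detach()  # an input that strongly excites the chosen channel
```

Real feature-visualization work usually adds regularization (blurring, jitter, total-variation penalties) so the result looks less like noise, but this is the core loop.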
We should use hooks for visualization … but which layers should I pick? I used hook_outputs for that, but hook_outputs.stored had the activations of the test image I used for prediction, not the pretrained weights. Any suggestions? @jeremy
Thanks.
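That behaviour is expected, by the way: hooks only ever capture activations for whatever input you feed through, while the learned weights live on the modules themselves and need no input at all. A tiny sketch of the distinction (the model here is just a stand-in):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 4, 3), nn.ReLU())  # stand-in for a trained model

# 1) The learned parameters need no input image at all:
weights = model[0].weight.detach()  # shape (4, 3, 3, 3)

# 2) Hooks, by contrast, only store activations *for a given input*:
stored = {}
model[0].register_forward_hook(lambda m, i, o: stored.update(out=o))
model(torch.randn(1, 3, 8, 8))  # stored["out"] now depends on this image
```

So if you want something input-independent, inspect the weights directly or synthesize an image as described above.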
Try searching for Grad-CAM; I think you need something similar to Grad-CAM.
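In case it helps, here is a hedged sketch of the Grad-CAM recipe: weight each feature map of a conv layer by the average gradient of the class score with respect to it, sum, and ReLU. The toy model and layer choice are assumptions; with a real network you would hook its last conv layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for your trained CNN
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

acts, grads = {}, {}
target = model[0]  # the only conv layer here; normally the *last* conv layer
target.register_forward_hook(lambda m, i, o: acts.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

img = torch.randn(1, 3, 32, 32)
scores = model(img)
scores[0, scores.argmax()].backward()  # gradient of the top-scoring class

# Grad-CAM: weight each feature map by its average gradient, then ReLU
w = grads["g"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
cam = F.relu((w * acts["a"]).sum(dim=1))       # (1, H, W) heatmap
cam = cam / (cam.max() + 1e-8)                 # normalize for display
```

Note this does need an input image, since Grad-CAM explains a particular prediction; for input-free visualization you'd use the weight or activation-maximization approaches above.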