Getting Important Features of a CNN

Hi Team,
I have trained a fastai model on some images using the DenseNet201 architecture.
Now I want to visualize the features the model has learnt for a sample image.
For example, if I pass an image to the model, can it show me the features that are important to the model?
Like the example below:

I already have the outputs of different layers for an image, but I am not able to identify the important features among them.
Is there any way we can do that?

@akshat8591 I’m sorry I don’t have the answer to your question, but it’s a very good one. Instead I have a different question:
Do you know the original source of this image?
I found this forum thread after doing a Google Image Search trying to find out how to credit the creator of this image (whoever they are).

EDIT: Found it! It’s from 2009. Lee et al, “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations”, ICML 2009. e.g., https://www.researchgate.net/publication/221344904_Convolutional_deep_belief_networks_for_scalable_unsupervised_learning_of_hierarchical_representations

…Although Lee et al. didn’t include the woman’s face on the left. Someone else must’ve added that at some point over the last 11 years.

Hi, you could look into Grad-CAM (https://towardsdatascience.com/demystifying-convolutional-neural-networks-using-gradcam-554a85dd4e48), LIME, or SHAP.

The output is not as nice as the example you posted, but it is a step in that direction.
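To make the Grad-CAM suggestion concrete, here is a minimal, self-contained sketch in plain PyTorch. It uses forward/backward hooks to capture a conv layer's activations and gradients, then weights the activation maps by the channel-averaged gradients. The tiny `nn.Sequential` model is a stand-in for DenseNet201 (with a real model you would hook a late conv layer, e.g. the last dense block); the model, layer choice, and helper names here are illustrative assumptions, not fastai API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN standing in for DenseNet201; the hook mechanics are identical.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),          # target conv layer at index 2
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
target_layer = model[2]

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out                           # (1, C, H, W)

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]                     # gradient w.r.t. the layer output

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(img, class_idx=None):
    model.eval()
    logits = model(img)                                  # (1, num_classes)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()                      # populate the backward hook
    acts = activations["value"]
    grads = gradients["value"]
    weights = grads.mean(dim=(2, 3), keepdim=True)       # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1))            # (1, H, W), keep positive evidence
    cam = cam / (cam.max() + 1e-8)                       # normalise to [0, 1]
    return cam.detach()

img = torch.randn(1, 3, 32, 32)                          # stand-in for a preprocessed image
heatmap = grad_cam(img)
```

For a real image you would then upsample `heatmap` to the input resolution and overlay it on the image to see which regions drove the prediction.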

IIRC it came from the course last year. It is indeed Grad-CAM applied many times to different parts of the image to produce those visualizations.
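A simpler way to probe importance region by region, loosely in the spirit of running an attribution method over many parts of the image, is occlusion sensitivity: slide a blank patch across the input and record how much the class score drops. This is a hedged sketch with a toy model, not the method used to make the original figure; the patch size and zero-fill are arbitrary choices.

```python
import torch
import torch.nn as nn

# Toy classifier; any trained model with an image input works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 10),
)

def occlusion_map(model, img, class_idx, patch=8):
    """Score drop per occluded patch; larger drop = more important region."""
    model.eval()
    with torch.no_grad():
        base = model(img)[0, class_idx].item()
        _, _, H, W = img.shape
        heat = torch.zeros(H // patch, W // patch)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                occluded = img.clone()
                occluded[:, :, i:i + patch, j:j + patch] = 0  # blank out one patch
                score = model(occluded)[0, class_idx].item()
                heat[i // patch, j // patch] = base - score
    return heat

img = torch.randn(1, 3, 32, 32)
heat = occlusion_map(model, img, class_idx=0)            # one cell per 8x8 patch
```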