Hi, I've been thinking about the heatmap method and I have the following idea. Do you think it would be useful?
- Wouldn't it be interesting to multiply the last layer's activations (512, 11, 11) by the weights of the next linear layer (512) and then take the mean to get the heatmap (11, 11)? I think some weights may be negative, which would mean those features are not very important for predicting a certain class.
Hey, have you taken a stab at this?
I was thinking the same thing as I watched the lesson 6 video earlier today.
There could be high activation values that are not used in the output for a given class, or low values multiplied by a big weight. So I think it could be interesting to multiply the activations by the weights for a given class. The heatmap should then be hot for the actual input class and cold for all other classes.
I tested it. It works pretty similarly to the heatmap in lesson 6, but on some images the heatmap is more accurate. Worth trying.
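For anyone who wants to try it, here is a minimal NumPy sketch of the idea discussed above. The shapes (512, 11, 11) come from the thread; the variable names and random data are just illustrative, and in practice you would pull the activations and the class row of the linear layer's weight matrix out of your trained model (e.g. with a forward hook in PyTorch):

```python
import numpy as np

# Hypothetical stand-ins for real model tensors: the last conv layer's
# activations (512 channels, 11x11 spatial) and the linear layer's
# weights for one output class (one weight per channel).
rng = np.random.default_rng(0)
acts = rng.standard_normal((512, 11, 11))   # last-layer activations
class_w = rng.standard_normal(512)          # weights for one class

# Plain lesson-6-style heatmap: simply average across channels.
plain_heatmap = acts.mean(axis=0)           # shape (11, 11)

# Proposed variant: scale each channel by its class weight first,
# so negatively weighted channels pull the map down instead of up.
weighted_heatmap = (acts * class_w[:, None, None]).mean(axis=0)  # (11, 11)

print(plain_heatmap.shape, weighted_heatmap.shape)
```

Computed for the true class, the weighted map should light up where the evidence for that class is, and stay cold when you use another class's weights, which matches what was described above.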