This is my first post, and I come with a question that might seem dumb to you, but there is something I really don’t understand about CNN feature visualization.
You can often see (for example here https://github.com/utkuozbulak/pytorch-cnn-visualizations) code to visualize CNN features based on the principle of “input optimization”:
- You load a pre-trained model and freeze all its weights.
- You create a random image.
- You feed it into the network and compute a loss that is minus the mean value of the feature map produced by the filter you want to visualize, so the higher the filter’s activation, the smaller the loss.
- You update the image by gradient descent on this loss.
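To make sure I understand the procedure, here is a minimal sketch of the loop I mean. The model is a hypothetical stand-in (a single randomly initialized conv layer instead of a real pre-trained network), and `filter_idx` is just an arbitrary choice:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained network: one conv layer.
# (In the real setting you would load a pre-trained model instead.)
torch.manual_seed(0)
model = nn.Conv2d(1, 8, kernel_size=3, padding=1)
for p in model.parameters():
    p.requires_grad_(False)          # 1. freeze all the weights

img = torch.randn(1, 1, 16, 16, requires_grad=True)   # 2. random image
opt = torch.optim.Adam([img], lr=0.1)
filter_idx = 3                       # the filter we want to visualize

initial = model(img)[0, filter_idx].mean().item()
for _ in range(50):
    opt.zero_grad()
    fmap = model(img)                          # 3. forward pass
    loss = -fmap[0, filter_idx].mean()         #    loss = minus mean activation
    loss.backward()
    opt.step()                                 # 4. update the image
final = model(img)[0, filter_idx].mean().item()
```

After the loop, the mean activation of the chosen filter has increased, which is exactly what worries me below.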
Let’s take a simple example. Here are the filter we are interested in and a random image.
If we compute the feature map, we get a single-element feature map:
We see that to make the mean of the feature map as high as possible, we just have to choose the right signs for a, b, c and d and then push their magnitudes to infinity.
And this actually still holds for a bigger image: you just have to find the right signs, and the input image will never converge.
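The scaling argument above can be checked numerically. With hypothetical filter weights standing in for the figure, the single-element feature map is a sum of element-wise products, so it grows linearly with the pixel magnitude once the signs are chosen:

```python
import numpy as np

# Hypothetical 2x2 filter weights (stand-ins for the filter in the figure).
w = np.array([[0.5, -1.0], [2.0, -0.3]])
# The "good" signs for the pixels a, b, c, d: the sign of each weight.
pixels = np.sign(w)

def activation(scale):
    # Single-element feature map: sum of element-wise products.
    return float(np.sum(w * (scale * pixels)))
```

Here `activation(10.0)` is exactly ten times `activation(1.0)`, so the loss is unbounded below, which is precisely my confusion.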
I know there must be something wrong in my reasoning, since this method apparently works in practice. Can you tell me where the issue is?