Deconvolution issue

Hi @jeremy, when you explain convolutions, the pictures for layer two are not actually the parameters (filters); they are deconvolutions of the activation maps. A 7x7 filter cannot show such rich information.

What does that mean?

@Kirito it is a way to visualize the filters. That’s all we need for lesson 1. We’ll get into the details of how later. :wink:

By deconvolution, I mean the reverse of the convolution operation: recovering the original picture (3 channels) from the activation map.

From the paper Visualizing and Understanding Convolutional Networks:

“For layers 2-5 we show the top
9 activations in a random subset of feature maps across the validation data, projected
down to pixel space using our deconvolutional network approach.”

So what you mean is that we apply some sort of inverse of the function learned at that layer, so as to get back the particular image features that the layer has learned to respond to?

Yes, though maybe I am wrong. I believe Jeremy will explain it in later classes.

Okay, thanks.

I’ve moved this to a separate topic in the advanced category to avoid intimidating beginners :slight_smile:

Yes, thank you Jeremy!

Deconvolution really is a misleading name, at least when we talk about CNNs and filters. As far as I know, we don’t actually find the inverse function (so we don’t recover the original picture); we just apply the transpose.

Deconvolution is a misleading term; a more appropriate name is transposed convolution. It works like this: suppose you have a 3x3 feature map. You visit the feature map one pixel at a time, and each pixel is multiplied by the entire filter, producing a patch the size of the filter. These patches are then placed (and summed where they overlap) according to the positions of the corresponding pixels in the original feature map. For more information on transposed convolution, see Up-sampling with Transposed Convolution by Naoki Shibuya.
For the output size of a transposed convolution, see: https://pytorch.org/docs/stable/nn.html?highlight=convtranspose2d#torch.nn.ConvTranspose2d
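To make that concrete, here is a minimal pure-Python sketch of the arithmetic described above (not fast.ai or PyTorch code; function name and zero-padding/stride assumptions are mine): each input pixel scales the whole kernel, and the scaled copies are placed at stride offsets and summed where they overlap.

```python
def conv_transpose2d(x, k, stride=1):
    """Transposed convolution of a 2-D input x with kernel k (no padding)."""
    h, w = len(x), len(x[0])
    kh, kw = len(k), len(k[0])
    # Output size formula matches torch.nn.ConvTranspose2d with padding=0:
    # out = (in - 1) * stride + kernel_size
    out_h = (h - 1) * stride + kh
    out_w = (w - 1) * stride + kw
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(h):
        for j in range(w):
            for a in range(kh):
                for b in range(kw):
                    # Pixel (i, j) contributes x[i][j] * k to a kh-by-kw patch;
                    # overlapping patches are summed.
                    out[i * stride + a][j * stride + b] += x[i][j] * k[a][b]
    return out

# A 3x3 feature map and a 2x2 kernel give a 4x4 output at stride 1:
x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, 1]]
y = conv_transpose2d(x, k)
```

So the output grows rather than shrinks, which is why this operation is used for up-sampling.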

Not quite - we actually learn a new set of weights for the “transposed conv” which recover the original as best as possible.

I admit I was a bit too general in my previous answer; transposed convolutions can get new weights through training (e.g. in autoencoder networks, U-Net, etc.), so they are definitely not just the transposed versions of the original filters.
However, just to clarify: do you mean they are learned in Visualizing and Understanding Convolutional Networks as well, Jeremy? Originally I was talking about deconv with regard to that publication (as it was mentioned above).