Understanding CNN kernels

Hi there, how are you doing?

I’m currently going through lesson 13 on CNNs. I have a quick question regarding the kernels/feature filters of this network, and I hope you can help me understand them better.

My understanding of CNNs so far is that the kernel values are randomly generated at the beginning of training and are then updated as training goes on. Is it correct to say that the CNN learns to “construct” the most effective kernels, so that the important features of the input are extracted by the convolution process?
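For reference, here is roughly the picture in my head, as a minimal PyTorch sketch (the layer sizes and the fake image are just made up by me, not taken from the lesson):

```python
import torch
import torch.nn as nn

# A conv layer with 3 kernels (output channels); its weights start out random.
conv = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3)
print(conv.weight.shape)  # torch.Size([3, 1, 3, 3]) -> three randomly initialised 3x3 kernels

# During training the kernel values are updated like any other weight.
x = torch.randn(1, 1, 28, 28)       # a fake 28x28 single-channel image
target = torch.randn(1, 3, 26, 26)  # a fake target, just to produce a loss
loss = ((conv(x) - target) ** 2).mean()
loss.backward()                      # gradients with respect to the kernel values
with torch.no_grad():
    conv.weight -= 0.1 * conv.weight.grad  # one hand-rolled SGD step
```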

On that same topic, let’s say 3 kernels are randomly generated. Is it possible for the 3 kernels to somehow end up with exactly the same values at the end of training? If that were to happen, I assume the output of each kernel would be identical, which would mean the CNN with 3 kernels is only as effective as one with a single kernel?
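This is what I mean by the kernels becoming redundant, again just a made-up sketch: if I copy one kernel’s values onto the other two, the three feature maps come out identical.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 3, kernel_size=3, bias=False)
with torch.no_grad():
    # force all 3 kernels to hold the exact same values
    conv.weight[1] = conv.weight[0]
    conv.weight[2] = conv.weight[0]

x = torch.randn(1, 1, 8, 8)
out = conv(x)                                # shape (1, 3, 6, 6)
print(torch.allclose(out[0, 0], out[0, 1]))  # True -- the feature maps are identical
print(torch.allclose(out[0, 0], out[0, 2]))  # True
```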

Thank you

Hi TAG
So originally kernels were manually coded: if we had three rows and three columns, then a top row of ones with the other two rows of zeros should find top edges. With deep learning we let the network do that job, so you can start with random numbers. Now consider a silly example: a polar bear in a snowstorm, essentially a white canvas (assume the bear has its eyes closed and its nose covered in snow). You can imagine the kernels all ending up similar, because every pixel would be white.
In which case zero bears, one bear, and a sleuth (yes, I looked that up) of bears would be indistinguishable.
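Just to make the hand-coded kernel idea concrete, here is a rough sketch (a made-up toy image, and a common ones-over-minus-ones top-edge kernel rather than necessarily the exact one from the lesson). It responds only where a bright region sits above a dark one, and it would be flat everywhere on the all-white polar-bear picture.

```python
import torch
import torch.nn.functional as F

# A hand-coded "top edge" kernel: bright rows above dark rows give a strong response.
top_edge = torch.tensor([[ 1.,  1.,  1.],
                         [ 0.,  0.,  0.],
                         [-1., -1., -1.]])

# A tiny fake image: bright (1.0) on the top half, dark (0.0) on the bottom half.
img = torch.zeros(6, 6)
img[:3, :] = 1.0

# F.conv2d expects (batch, channels, height, width)
out = F.conv2d(img[None, None], top_edge[None, None])
print(out[0, 0])
# The response is non-zero only on the rows where bright meets dark;
# on an all-white "polar bear in a snowstorm" image it would be flat everywhere.
```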

Regards Conwyn

I see. Thank you so much