CNN - Do the channels that come out of each layer stay "separate" through the rest of the network?

In a CNN such as

simple_cnn = sequential(
    conv(1 ,4),            #14x14
    conv(4 ,8),            #7x7
    conv(8 ,16),           #4x4
    conv(16,32),           #2x2
    conv(32,2, act=False), #1x1
    Flatten(),
)

Do the channels that come out of each layer stay “separate” through the rest of the network, or does it vary by network, or do they generally get treated as a single array (196 elements in the first output)?
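For what it’s worth, one way to check is to look at the output shape of the first layer. The sketch below is mine, not from the book: it assumes a 28x28 single-channel input (as in the MNIST example the code above comes from) and uses a plain stride-2 `nn.Conv2d` as a stand-in for the fastbook `conv` helper:

```python
import torch
import torch.nn as nn

# Stand-in for conv(1, 4): a stride-2 3x3 convolution, 1 input channel -> 4 output channels
first_layer = nn.Conv2d(1, 4, kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 1, 28, 28)   # (batch, channels, height, width)
out = first_layer(x)
print(out.shape)                # torch.Size([1, 4, 14, 14])
```

The output keeps the channels as a separate dimension (4 feature maps of 14x14), not a flat 196-element array; flattening only happens at the `Flatten()` at the end.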

Perhaps I don’t understand your question, but in a convolutional layer, each output channel has a kernel for each input channel. For example, if the convolution has 2 input channels and 1 output channel, the single output channel is created by summing the convolutions of the two input channels, each done with its own kernel weights. So each output channel has a “view” into all input channels.
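A minimal sketch of that summing, assuming 2 input channels and 1 output channel and using PyTorch’s functional API (not the fastbook helpers):

```python
import torch
import torch.nn.functional as F

# 2 input channels -> 1 output channel: the weight holds one 3x3 kernel per input channel
weight = torch.randn(1, 2, 3, 3)          # (out_channels, in_channels, kH, kW)
x = torch.randn(1, 2, 8, 8)

# Convolve each input channel with its own kernel, then sum the two results
per_channel = [F.conv2d(x[:, i:i+1], weight[:, i:i+1]) for i in range(2)]
manual = per_channel[0] + per_channel[1]

# The same thing done in one call
direct = F.conv2d(x, weight)

print(torch.allclose(manual, direct, atol=1e-6))   # True
```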

Yes; I was worried that I wasn’t phrasing the question very clearly, but if I understand your answer correctly, then I think it does address my intended question.

In the conv that is commented #7x7, is it correct to say that each of the 8 output channels has a view into each of the 4 input channels?
Is the process the same as the one described in the Color Images section of the linked lesson: the kernel for the xth output channel (of the eight) is passed over all the input channels, and the results are summed to give the value of that output channel?

Yes, I also learned it from the linked lesson :slight_smile: If you read it carefully you’ll notice that the number of weights is channel_input * channel_output * kernel_size * kernel_size. This is because each output channel is created by summing the convolutions of each input channel, meaning that each output channel has a “view” into all input channels. And yes, the result for an output channel is the sum of the convolutions over all input channels (plus a bias, which is an additional weight/parameter per output channel).
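You can see that weight count directly on the #7x7 layer (4 input channels, 8 output channels, 3x3 kernel). This is a hedged sketch using a plain `nn.Conv2d` with the stride/padding the fastbook `conv` helper uses, so the parameter counts match the formula above:

```python
import torch.nn as nn

# The layer commented #7x7 above: 4 input channels -> 8 output channels, 3x3 kernel
layer = nn.Conv2d(4, 8, kernel_size=3, stride=2, padding=1)

print(layer.weight.shape)   # torch.Size([8, 4, 3, 3]): each of the 8 output channels
                            # has its own 3x3 kernel for each of the 4 input channels
print(layer.bias.shape)     # torch.Size([8]): one bias per output channel
print(sum(p.numel() for p in layer.parameters()))   # 8*4*3*3 + 8 = 296
```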