I’m trying to make a simple passthrough CNN that just predicts the image it receives as input. I would assume that for a 3-channel image, 3 filters of size 1 with stride 1 and no padding should be able to do the job perfectly, but it fails to do so (using L1 loss):
import torch.nn as nn

class PassthroughNet(nn.Module):
    def __init__(self, ch_in):
        super().__init__()
        # a single 1x1 conv mapping ch_in channels back to ch_in channels
        self.conv_final = nn.Conv2d(ch_in, ch_in, kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        return self.conv_final(x)
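For what it's worth, the identity mapping is exactly representable by this layer: a 1x1 conv over 3 channels is just a 3x3 matrix applied at every pixel, so setting the weight to the 3x3 identity and the bias to zero reproduces the input perfectly. A quick standalone sanity check (my own sketch, not the poster's code):

```python
import torch
import torch.nn as nn

# A 1x1 conv over 3 channels acts per pixel as a 3x3 matrix multiply,
# so the identity mapping is exactly representable: weight = I, bias = 0.
conv = nn.Conv2d(3, 3, kernel_size=1, stride=1, padding=0)
with torch.no_grad():
    # conv weight shape is (out_ch, in_ch, kH, kW) = (3, 3, 1, 1)
    conv.weight.copy_(torch.eye(3).view(3, 3, 1, 1))
    conv.bias.zero_()

x = torch.randn(1, 3, 8, 8)
y = conv(x)
print(torch.allclose(x, y))  # the conv reproduces its input exactly
```

So the question is purely about whether the optimizer finds these weights, not about capacity.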
I was asking @dusan. But this is a toy experiment to understand convolutions better. You can measure how much time a CNN takes to learn a simple identity mapping, see whether it ever reaches it perfectly, or explore what optimization is necessary for it to get there. The purpose is experimentation.
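To make the experiment concrete, here is a minimal training sketch of my own (random inputs as a stand-in for real images, Adam as an assumed optimizer choice) that trains the single 1x1 conv to copy its input under L1 loss and then inspects how close the learned weight is to the identity:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Minimal version of the experiment: one 1x1 conv, L1 loss, random
# inputs in [0, 1] standing in for images. Optimizer choice is mine.
conv = nn.Conv2d(3, 3, kernel_size=1)
opt = torch.optim.Adam(conv.parameters(), lr=1e-2)

for step in range(2000):
    x = torch.rand(8, 3, 16, 16)
    loss = nn.functional.l1_loss(conv(x), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())              # should be small after training
print(conv.weight.view(3, 3))   # should be close to the 3x3 identity
print(conv.bias)                # should be close to zero
```

In this setting the loss typically drops steadily, which suggests that when a deeper net fails at the same task, the obstacle is the optimization of the intermediate layers rather than the 1x1 conv itself.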
Sorry for the late answer, guys. I didn’t tinker with it much more, so I’m not sure what’s going on. Either some part of the input information is irretrievably lost, or the net just goes in the wrong direction during learning and gets stuck? Those are just my hypotheses; it could be something else, and more experiments are needed.
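The first hypothesis is directly testable for the single-layer case: a 1x1 conv acts per pixel as y = Wx + b with a 3x3 matrix W, so as long as W stays full-rank the input is exactly recoverable as x = W⁻¹(y − b), even when the conv is nowhere near the identity. A quick check of my own (random untrained conv, just to illustrate the algebra):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A 1x1 conv over 3 channels is y = W x + b per pixel, with W a 3x3
# matrix. If W is full-rank, no information is lost: x = W^-1 (y - b).
conv = nn.Conv2d(3, 3, kernel_size=1)
W = conv.weight.detach().view(3, 3)
b = conv.bias.detach().view(1, 3, 1, 1)

print(torch.linalg.matrix_rank(W).item())  # 3 => full-rank, invertible

x = torch.rand(1, 3, 8, 8)
y = conv(x).detach()
# invert the per-pixel linear map to reconstruct the input
x_rec = torch.einsum('ij,bjhw->bihw', torch.linalg.inv(W), y - b)
print(torch.allclose(x, x_rec, atol=1e-4))  # input recovered
```

So for the one-layer net, information loss would require the weight matrix to collapse to a lower rank; "stuck in a bad direction" seems the more likely culprit in deeper variants.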