How to make a ResNet work with black-and-white images

I recently had to tackle this. The solution is to replace the first conv layer with one whose number of input channels (`ni`) is 1. Assume we have made an encoder via create_body (this example uses a resnet34):

from fastai.vision.all import *
from torchvision.models import resnet34

body = create_body(resnet34, pretrained=True)
# Swap the first conv: 1 input channel instead of 3 (this layer's weights are re-initialized)
body[0] = nn.Conv2d(1, 64, kernel_size=(7,7), stride=(2,2), padding=(3,3), bias=False)

From here we’d probably want to make this particular layer trainable :slight_smile: Hope this helps someone
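On the "make it trainable" point: if you later freeze the pretrained body (e.g. with fastai's `learn.freeze()`), the new conv's parameters will have `requires_grad=False` and won't train. A minimal stand-alone sketch of re-enabling gradients on just that layer (plain PyTorch, not the poster's exact code):

```python
import torch.nn as nn

# Stand-in for the replaced first layer (body[0] above)
conv = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)

# Simulate a frozen body, as after freezing the pretrained encoder
for p in conv.parameters():
    p.requires_grad_(False)

# Re-enable gradients on just this layer so it actually trains
for p in conv.parameters():
    p.requires_grad_(True)

print(conv.weight.requires_grad)  # True
```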


You can also convert and use the original weights, as Ross Wightman does in pytorch-image-models:
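His exact snippet isn't reproduced here, but the idea can be sketched roughly like this (assumption: collapse the pretrained RGB kernels into one channel by summing over the input-channel dimension, which I believe is what timm's `adapt_input_conv` helper does; taking the mean instead only differs by a scale factor):

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained ResNet stem conv; in practice use model.conv1
rgb_conv = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)

gray_conv = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    # Collapse the (64, 3, 7, 7) RGB weights into (64, 1, 7, 7)
    gray_conv.weight.copy_(rgb_conv.weight.sum(dim=1, keepdim=True))

x = torch.randn(2, 1, 224, 224)   # batch of single-channel 224x224 images
print(gray_conv(x).shape)          # torch.Size([2, 64, 112, 112])
```

This keeps the pretrained filter shapes useful for grayscale input instead of throwing them away with a freshly initialized layer.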


It seems like you’d want to use the mean across the three input channels of the pretrained weights.

Is that what this code is doing?

OK, I’m working on B&W spectrograms and found that ResNet expects RGB, so I expanded the single channel into two additional identical channels, as shown by these programmatic checks of my images:

the shape of pix is:  (224, 224, 3)
image format:  PNG
image size:  (224, 224)
image mode:  RGB
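For reference, that kind of channel duplication can be reproduced with PIL's `convert("RGB")`, which replicates a single-channel image into three identical channels (synthetic image below, not the actual spectrograms):

```python
import numpy as np
from PIL import Image

# Synthetic 224x224 grayscale stand-in for a spectrogram
gray = Image.fromarray(np.random.randint(0, 256, (224, 224), dtype=np.uint8), mode="L")
rgb = gray.convert("RGB")      # copies the single channel into R, G and B
pix = np.array(rgb)

print("the shape of pix is: ", pix.shape)   # (224, 224, 3)
print("image mode: ", rgb.mode)             # RGB
```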

I am getting what appear to be poor loss results from resnet18 and resnet34, and I’m not sure what you are saying here. The docs for create_body state:

"Cut off the body of a typically pretrained arch as determined by cut"

What would ever make me think that would solve my problem? Can you point to a source that discusses this kind of detail? The API docs don’t provide any information that is intelligible to me.