Using VGG with greyscale images

I haven’t seen anyone try this. The color channels seem important for differentiating between a lot of the images in ImageNet. I tried a variation of this: I trained a network on MNIST (60k grayscale images) and then tried transferring the pretrained weights to a different task. In my experience it wasn’t worth the effort, though perhaps that was because I already had sufficient data.

I think what probably makes more sense is converting your grayscale images to RGB and then using the pretrained ImageNet weights as usual. The further your images look from ImageNet, though, the less value the pretrained weights seem to have.

An alternative approach is to find a parallel task with a lot of data that looks more similar to yours. For example, an old Kaggle competition on X-rays or CT images probably has pretrained weights and networks lying around on GitHub and the competition forums.
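The grayscale-to-RGB conversion is just replicating the single channel three times so the image matches the input shape pretrained ImageNet models expect. A minimal sketch with NumPy (the helper name `grayscale_to_rgb` is my own, not from any library):

```python
import numpy as np

def grayscale_to_rgb(img):
    """Replicate a single-channel image (H, W) into three identical
    channels (H, W, 3), matching the input shape of ImageNet models."""
    return np.repeat(img[..., np.newaxis], 3, axis=-1)

# Hypothetical 28x28 grayscale image (MNIST-sized), values in [0, 1]
gray = np.random.rand(28, 28).astype(np.float32)
rgb = grayscale_to_rgb(gray)

print(rgb.shape)        # (28, 28, 3)
print(np.array_equal(rgb[..., 0], rgb[..., 2]))  # True: channels are identical
```

If you use an image library instead, PIL’s `Image.convert("RGB")` does the same thing. Either way, remember to apply the same ImageNet normalization the pretrained network was trained with before feeding the images in.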