Greyscale Image Training in fastai

I have found something strange. I want to train a single-channel image classifier for satellite-type data, eventually to be used with IR satellite data. I have struggled to find enough IR satellite data to do this, so I wish to pretrain a greyscale model on the Planet dataset and then fine-tune it on my small IR dataset. Hopefully this will give some boost!

I have tweaked the Planet notebook to add only the following lines of code:

import pathlib
from PIL import Image

for p in pathlib.Path('/root/.fastai/data/planet/train-jpg').iterdir():
  img = Image.open(p)
  img = img.convert('L')
  img.save(p)

This converts the RGB images to greyscale and saves them down in place. I then run the rest of the notebook and, despite there being only a single channel where there should be three, the notebook seems to be running/training, albeit slowly. My intuition was that since we are using a pretrained resnet34 in this first case, it shouldn't run on single-channel input.

How is this seemingly possible?

Look at the code for open_image. The default behavior is to open an image with PIL and convert to RGB form. This means your grayscale image is being converted to a 3 channel image with the same pixel intensities at each channel. You can stop this behavior by changing the convert_mode parameter on your ImageItemList.
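You can see this replication effect with PIL alone. The snippet below is a minimal sketch (not fastai's actual `open_image` source) that mimics its default behavior: a greyscale image converted to RGB ends up with three identical channels.

```python
from PIL import Image

# A tiny greyscale image standing in for one of the converted Planet tiles
gray = Image.new('L', (4, 4), color=128)

# Mimic open_image's default: convert to RGB regardless of source mode
rgb = gray.convert('RGB')

print(gray.mode, len(gray.getbands()))  # 'L', 1 band
print(rgb.mode, len(rgb.getbands()))    # 'RGB', 3 bands
print(rgb.getpixel((0, 0)))             # (128, 128, 128): same intensity per channel
```

So the pretrained resnet34 still receives a 3-channel tensor; the greyscale information is simply duplicated across the channels.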

You can also check what’s happening by looking at a batch of data. You can check the shapes of your image tensors and convert them back to images.
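As a sketch of that shape check, without needing a full fastai `DataBunch` (the array shapes here are from PIL/NumPy, not fastai's tensor layout): a greyscale image has a single plane, while the RGB-converted version gains a channel dimension.

```python
import numpy as np
from PIL import Image

# Stand-in for a converted greyscale training image
gray = Image.new('L', (4, 4), color=200)

arr_gray = np.asarray(gray)                 # shape (H, W): one channel
arr_rgb = np.asarray(gray.convert('RGB'))   # shape (H, W, 3): replicated channels

print(arr_gray.shape)  # (4, 4)
print(arr_rgb.shape)   # (4, 4, 3)
```

In a fastai batch you would expect the analogous check to show a channel dimension of 3 unless `convert_mode` is changed.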


Thank you for the excellent explanation @KarlH.

FYI - I realised that the slow training was due to not using the GPU!