I'm trying to train a grayscale image classifier on the Imagenette dataset, but something seems wrong:

from fastai.vision.all import *  # provides DataBlock, PILImageBW, get_image_files, etc.

def get_dbunch_gray(size=160, bs=4, sh=0., workers=None):
    source = Path("~/data/imagenette-160").expanduser()  # expand ~ explicitly; Path does not do this on its own
    dblock = DataBlock(blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
                       get_items=get_image_files, get_y=parent_label)
    item_tfms = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
    batch_tfms = RandomErasing(p=0.3, max_count=3, sh=sh) if sh else None
    return dblock.databunch(source, path=source, bs=bs, num_workers=workers,
                            item_tfms=item_tfms, batch_tfms=batch_tfms)

dbunch_gray = get_dbunch_gray()

but the image shown looks like the following:

It does not look like a grayscale image. What am I doing wrong?

The above code can easily be run in a Jupyter notebook after downloading the imagenette-160 dataset and changing the path accordingly.


Try explicitly specifying a colormap, for example with cmap='Greys_r'.
Here _r stands for reversed.
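The _r suffix reverses any matplotlib colormap. As a quick sketch (plain matplotlib, no fastai or dataset required) of which end of each map renders as black:

```python
import matplotlib.pyplot as plt

greys = plt.get_cmap("Greys")      # 0.0 -> white, 1.0 -> black
greys_r = plt.get_cmap("Greys_r")  # reversed: 0.0 -> black, 1.0 -> white

# A colormap is a callable: cmap(x) returns an (r, g, b, a) tuple for x in [0, 1]
print(greys(0.0)[:3])    # near (1, 1, 1): low pixel values render light
print(greys_r(0.0)[:3])  # near (0, 0, 0): low pixel values render dark
```

So for images where 0 means black, as in typical grayscale photos, Greys_r gives the expected rendering.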



Thanks for the reply. I used dbunch_gray.show_batch(cmap='Greys_r') for display, and the image looks right now.

So where does the problem come from?
Is it only a display problem, or do I need to do more in the image preprocessing?
Is this a bug in fastai2?

Best regards,

It’s only a display problem I believe. I don’t see anything wrong with your code.


Note that nothing says black must be zero and white must be 255, or vice versa. Different domains make different choices here, so you have to use a colormap to say which convention you want.
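To illustrate why the display looked colored in the first place: matplotlib renders a single-channel array through its default colormap (viridis in recent versions), which false-colors the data. Passing an explicit grayscale cmap changes only the rendering, not the pixels. A minimal sketch in plain matplotlib, not fastai-specific:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
img = rng.random((16, 16))  # a single-channel "grayscale" array

fig, (ax1, ax2) = plt.subplots(1, 2)
im1 = ax1.imshow(img)                  # default cmap: false-color rendering
im2 = ax2.imshow(img, cmap="Greys_r")  # explicit cmap: true grayscale
print(im1.get_cmap().name)  # the library default, e.g. 'viridis'
plt.close(fig)
```

fastai's show_batch forwards keyword arguments like cmap down to matplotlib, which is why passing cmap='Greys_r' there fixes the display.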