Dealing with black and white masks for coloured images

So I am working on a forgery detection dataset (found at ), which is essentially a segmentation task. The images are coloured, while the ground_truth images are single-channel (i.e. black and white). When I call learn.fit_one_cycle(), I get a runtime error: 'The size of tensor a (57344) must match the size of tensor b (50176) at non-singleton dimension 1'. I am assuming this is a result of the mismatch between the number of channels in the coloured images and the black-and-white ground_truth images. (Correct me if I am wrong, please! I'm a total novice!)

Now my doubt is: when I open one of these ground_truth images with open_image(), the shape of the tensor shown is (3, x, y), but when I open the same file with open_mask(), the shape displayed is (1, x, y).
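I think this is just two views of the same grayscale file. A minimal sketch with PIL/NumPy (not fastai itself, but I believe open_image/open_mask behave analogously): decoding a grayscale image in RGB mode replicates the one channel three times, while decoding it in 'L' mode keeps a single channel, and both carry the same pixel information.

```python
import numpy as np
from PIL import Image

# A tiny black-and-white mask: a white square on a black background.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 255
img = Image.fromarray(mask, mode="L")

# Decoded as RGB (roughly what open_image does), the single grayscale
# channel is replicated into 3 identical channels.
rgb = np.array(img.convert("RGB"))
print(rgb.shape)   # (8, 8, 3)

# Decoded as grayscale (roughly what open_mask does), it stays one channel.
gray = np.array(img.convert("L"))
print(gray.shape)  # (8, 8)

# Every RGB channel equals the grayscale channel: same information.
print((rgb[..., 0] == gray).all())  # True
```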

So when using the function:

data = (SegmentationItemList.from_folder(img_path)
        .split_by_rand_pct()
        .label_from_func(get_y_fn, classes=range(256))
        .transform(size=224, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

does label_from_func() read the masks in the folder as black and white (as they originally are) or as coloured? Whatever the case, how do I remove this runtime error and make the number of channels in the actual and ground-truth images consistent? Or is the error due to some other issue?
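One thing I have been experimenting with, in case it helps frame the question: since my masks only ever contain the values 0 and 255, I tried preprocessing them so each pixel is a class index (0 or 1) and then labelling with classes=[0, 1] instead of classes=range(256). This is a sketch of my preprocessing, assuming the masks are 0/255 images; mask_dir and the in-place rewrite in the comment are hypothetical and just show how I would apply it.

```python
import numpy as np
from PIL import Image

def binarize_mask(img: Image.Image) -> Image.Image:
    """Collapse a (possibly RGB) 0/255 ground-truth image into a
    single-channel mask whose pixel values are class indices 0 and 1."""
    m = np.array(img.convert("L"))      # force a single channel
    m = (m > 127).astype(np.uint8)      # map 255 -> class 1, 0 -> class 0
    return Image.fromarray(m, mode="L")

# Hypothetical usage: rewrite every ground-truth file in place, then
# build the DataBunch with classes=[0, 1].
# for fn in mask_dir.glob("*.png"):
#     binarize_mask(Image.open(fn)).save(fn)

demo = Image.fromarray(np.array([[0, 255], [255, 0]], dtype=np.uint8), mode="L")
out = np.array(binarize_mask(demo))
print(out)  # only class indices 0 and 1 remain
```

Is this the right direction, or does fastai handle the 0/255 values internally?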

Any help is much appreciated. Thank you!