Pixelwise weighted loss

I have been tackling this problem for over two weeks now. I am trying to implement a pixel-wise weighted loss function, so I created a two-channel TIF file (one channel for the mask, one for the weights) that I feed into my fastai pipeline as the label. This way I wanted to ensure that both the mask and the corresponding weights are augmented simultaneously. I defined the following segmentation classes to open and handle this type of input:
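For context, this is the kind of loss I am aiming for, sketched in plain PyTorch independent of fastai (the function name and the channel layout are my own convention, not from any library): channel 0 of the target carries the integer class mask, channel 1 the per-pixel weights.

```python
import torch
import torch.nn.functional as F

def weighted_pixel_ce(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # logits: (B, C, H, W) raw class scores
    # target: (B, 2, H, W) -- channel 0: integer mask, channel 1: pixel weights
    mask    = target[:, 0].long()   # (B, H, W) class ids
    weights = target[:, 1]          # (B, H, W) per-pixel weights
    # per-pixel cross entropy, then weight and average
    loss = F.cross_entropy(logits, mask, reduction='none')  # (B, H, W)
    return (loss * weights).mean()
```

With all weights set to 1 this reduces to the ordinary mean cross entropy, which is a handy sanity check.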

class CustomSegmentationLabelList(ImageList):
  _processor = vision.data.SegmentationProcessor
  def __init__(self, items:Iterator, classes:Collection=None, **kwargs):
    super().__init__(items, **kwargs)  # keep ImageList's own setup, otherwise items are lost
    self.copy_new.append('classes')    # carry classes through splits and filters
    self.classes,self.loss_func = classes,CrossEntropyFlat(axis=1)
  def open(self, fn): return open_custom_mask(fn, after_open=self.after_open)

class CustomSegmentationItemList(ImageList):
  # defined after the label class so the reference below resolves
  _label_cls,_square_show_res = CustomSegmentationLabelList,False

def open_custom_mask(fn:PathOrStr, after_open:Callable=None)->Image:
  x = io.imread(fn)
  if after_open: x = after_open(x)
  x = pil2tensor(x, np.float32)
  return Image(x)  # plain Image (not ImageSegment), so the weights stay float
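For completeness, this is roughly how I build the two-layer array before saving it as a TIF (a minimal numpy sketch with hypothetical values; the actual saving step is omitted here):

```python
import numpy as np

# hypothetical toy example: a 2x2 class mask and matching pixel weights
mask    = np.array([[0, 1], [1, 0]], dtype=np.float32)   # class ids
weights = np.array([[1.0, 2.0], [2.0, 1.0]], dtype=np.float32)

# stack along a new leading axis -> shape (2, H, W): layer 0 mask, layer 1 weights
label = np.stack([mask, weights])
```

Keeping both layers in one file is what lets fastai augment mask and weights together with tfm_y=True.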

Calling the open_custom_mask function gives me the desired two-channel Image. But when I create my data object and inspect it, my y's have suddenly been converted into three-channel Images, with each channel containing the same values. I have gone through all the files a dozen times now, but I cannot find where fastai applies this conversion. This is how I create my data object:

data = (CustomSegmentationItemList.from_df(img_df, IMG_PATH, convert_mode='L')
      .label_from_func(get_mask, classes=myclasses)
      .transform(tfms=None, tfm_y=True, size=TILE_SHAPE)
      .databunch())
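To argue that open_custom_mask itself is not the culprit, here is fastai's pil2tensor logic re-implemented as I read it in the source (a sketch, not the library function itself): a (H, W, 2) array should still have exactly two channels after this step, so the extra channel must appear later in the pipeline.

```python
import numpy as np
import torch

def pil2tensor_like(image, dtype):
    # re-implementation of fastai's pil2tensor as I understand it
    a = np.asarray(image)
    if a.ndim == 2: a = np.expand_dims(a, 2)   # (H, W)    -> (H, W, 1)
    a = np.transpose(a, (1, 0, 2))             # (H, W, C) -> (W, H, C)
    a = np.transpose(a, (2, 1, 0))             # (W, H, C) -> (C, H, W)
    return torch.from_numpy(a.astype(dtype, copy=False))
```

So a two-layer TIF read as (H, W, 2) comes out as a (2, H, W) tensor, exactly what I observe right after open_custom_mask.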

Also, would you recommend doing it this way, or should I rather create a custom ItemBase that holds two separate Image files for the labels?