Segmentation Dice Loss for Background Class Images (no TP, FP, FN) Possible?

I’ve been training U-Net models on a regular dataset, and on one augmented with images that contain only the background class.

For those situations with a background-only image, by definition there can be no TP. The ideal outcome is that the model predicts TP = 0, FP = 0, FN = 0, and TN = every pixel (with no foreground pixels in the ground truth, there is nothing to miss, so FN must also be 0). If this ideal is achieved, the Dice loss, with its usual smoothing term, goes to zero.
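
To make the behavior concrete, here is a minimal sketch of a smoothed Dice loss in plain NumPy (the function name `dice_loss` and the smoothing term `eps` are illustrative choices of mine, not from any particular library). On a background-only target, a perfect all-background prediction gives TP = FP = FN = 0, so the smoothed Dice is eps/eps = 1 and the loss is 0:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss = 1 - Dice coefficient, with a smoothing term `eps`
    so the all-background case (TP = FP = FN = 0) is well defined."""
    tp = np.sum(pred * target)          # true positives
    fp = np.sum(pred * (1 - target))    # false positives
    fn = np.sum((1 - pred) * target)    # false negatives
    return 1 - (2 * tp + eps) / (2 * tp + fp + fn + eps)

# Background-only ground truth, perfect all-background prediction:
target = np.zeros((8, 8))
pred = np.zeros((8, 8))
print(dice_loss(pred, target))  # -> 0.0, the ideal case described above
```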

BUT, if there’s even a single FP, the Dice loss jumps straight to essentially its maximum value (an FN cannot occur here, since there are no foreground pixels to miss). So the model gets no usable signal for learning to classify every pixel as background. When I train on a mix of images with foreground pixels and background-only images, the loss gets dominated by this effect. Is there a workaround for this?
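
Continuing the same sketch, a single false-positive pixel on that background-only target shows the cliff I mean: the smoothed Dice collapses to eps / (1 + eps) ≈ 0, so the loss sits at essentially its maximum no matter how close the prediction is to all-background:

```python
# Continuing the sketch above: one false-positive pixel on the
# background-only target saturates the loss at ~1.
pred_one_fp = np.zeros((8, 8))
pred_one_fp[0, 0] = 1.0  # a single FP
print(dice_loss(pred_one_fp, target))  # -> ~0.999999, near-maximal loss
```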