# Understanding the dice coefficient

Hello,
I’m trying to understand the implementation of the Dice coefficient, which is defined by:

Dice(X, Y) = 2 |X ∩ Y| / (|X| + |Y|)

I would like to know: what does this ∩ sign mean?
Also, I have an implementation of the function in PyTorch:

```python
import torch
import torch.nn as nn


def dice_coeff(pred, target):
    smooth = 1.
    num = pred.size(0)
    m1 = pred.view(num, -1)  # Flatten
    m2 = target.view(num, -1)  # Flatten
    # Sum over dim 1 to get one intersection value per sample
    intersection = (m1 * m2).sum(1)

    return (2. * intersection + smooth) / (m1.sum(1) + m2.sum(1) + smooth)


class SoftDiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(SoftDiceLoss, self).__init__()

    def forward(self, logits, targets):
        probs = torch.sigmoid(logits)  # F.sigmoid is deprecated
        num = targets.size(0)  # Number of samples in the batch

        score = dice_coeff(probs, targets)
        score = 1 - score.sum() / num  # 1 - mean Dice over the batch
        return score
```

If I try to map the math formula to the code, the ∩ sign is actually a multiplication, and |X| is actually the sum over the matrix, right?
One last question: The author used a “smooth” factor. Why do we want to do that?
Thank you.

The ∩ sign stands for set intersection, and |X| stands for the cardinality of the set X – basically the number of elements in the set.

This coefficient measures the similarity between sets X and Y. If the two sets are identical (i.e. they contain the same elements), the coefficient is equal to 1.0, while if X and Y have no elements in common, it is equal to 0.0. Otherwise it is somewhere in between.
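
To make this concrete, here is a minimal sketch (not from the thread) that computes the coefficient on plain Python sets:

```python
# Illustrative sketch: the Dice coefficient 2|X ∩ Y| / (|X| + |Y|) on Python sets.
def dice(x: set, y: set) -> float:
    return 2 * len(x & y) / (len(x) + len(y))

print(dice({1, 2, 3}, {1, 2, 3}))  # identical sets -> 1.0
print(dice({1, 2}, {3, 4}))        # disjoint sets  -> 0.0
print(dice({1, 2}, {2, 3}))        # one shared element out of four -> 0.5
```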


The reason the intersection is implemented as a multiplication and the cardinality as `sum()` is that `pred` and `target` are one-hot encoded vectors, i.e. vectors consisting of zeros and ones.

To be fair, since `pred` is computed with the logistic sigmoid, it can contain values between 0 and 1; but as the model becomes more certain of its predictions, those “in between” values should move towards 0 or 1.
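
A tiny sketch (illustrative values, not from the thread) of how the product-and-sum trick plays out on binary vectors:

```python
import torch

# For 0/1 vectors, the elementwise product marks positions that are 1 in both,
# so its sum counts |X ∩ Y|, and each vector's own sum counts |X| and |Y|.
pred   = torch.tensor([1., 0., 1., 1., 0.])
target = torch.tensor([1., 1., 1., 0., 0.])

intersection = (pred * target).sum()           # positions 0 and 2 -> 2.0
print(intersection.item())                     # 2.0
print(pred.sum().item(), target.sum().item())  # |X| = 3.0, |Y| = 3.0
print((2 * intersection / (pred.sum() + target.sum())).item())  # 2*2/6 ≈ 0.667
```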


Thanks a lot for this, guys! It really helps. I wonder how I could have figured that out by myself. By the way, any idea about the smooth factor? Why would we want to add it? What is its purpose?
Thanks.


Otherwise you’d divide by zero if the prediction and the ground truth are both empty.


It seems to me that most of the time, when we add a term to avoid dividing by zero, we choose a small value such as 1e-8, so as not to alter the original expression too much. Why is smooth equal to 1 and not a smaller value such as 1e-8?
Thanks.
Thanks.


I think the smooth factor does more than just protect against division by zero (in which case it’s usually called “epsilon” and appears only in the denominator). I’m no expert on the Dice coefficient, but the smooth factor may be used here to literally make the function (or its derivative) smoother.
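
One way to see the difference between `smooth = 1` and a tiny epsilon is to evaluate the formula directly. Below is a sketch with a hypothetical helper `dice(inter, sx, sy, smooth)` standing in for the expression inside `dice_coeff`:

```python
# Sketch: evaluate (2*|X∩Y| + smooth) / (|X| + |Y| + smooth) for a few cases.
def dice(inter, sx, sy, smooth):
    return (2 * inter + smooth) / (sx + sy + smooth)

# Both sets empty: any positive smooth avoids 0/0 and yields 1.0.
print(dice(0, 0, 0, smooth=1.0))   # 1.0
print(dice(0, 0, 0, smooth=1e-8))  # 1.0

# One pixel each, no overlap: smooth=1 noticeably softens the score,
# while a tiny epsilon leaves the value essentially unchanged.
print(dice(0, 1, 1, smooth=1.0))   # 0.333...
print(dice(0, 1, 1, smooth=1e-8))  # ~0.0
```

So a large smooth term does change the value of the expression for small sets, pulling it away from the hard 0/1 extremes, which is consistent with reading it as more than a pure division-by-zero guard.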


Thank you for your reply. It’s not the first time I’ve heard this, but I don’t see how adding +1 to both the numerator and the denominator makes the function (or its derivative) smoother. Can someone explain this to me?

@pietz @Ekami
I still don’t understand the smoothing part. How on earth will our denominator ever be zero? Summing the ground-truth labels will always give something greater than zero, because we have at least two classes, labelled 1 and 0.

The other possibility is that the numerator becomes zero, which I don’t see as a problem, since we are trying to push it towards 1. I hope there is a better explanation for this.

In binary segmentation you only need one output per pixel: 0 = no, 1 = yes. Therefore an empty ground-truth sample consists only of 0s, and its sum is zero.

Suppose our arrays contain only 0s and 1s. When we take the element-by-element product and then sum, we count only the intersection of the 1s (not the 0s), so it isn’t a full intersection of the two arrays. Likewise, we compute the cardinality as the sum of the 1s (again ignoring the 0s). Is that what we want?

I am wondering how we can implement the Dice coefficient for a multi-class segmentation task. Any help would be appreciated.


Not sure, but since we deal with probability scores, I feel that if we compute it one-vs-rest for each class, it should work for the multi-class case.
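
A hedged sketch of that one-vs-rest idea: compute Dice per class channel on one-hot/softmax tensors and average. The function name and `(N, C, H, W)` layout are illustrative assumptions, not an established recipe:

```python
import torch

def multiclass_dice(probs, targets_onehot, smooth=1.0):
    # probs, targets_onehot: (N, C, H, W)
    dims = (0, 2, 3)  # sum over batch and spatial dims, keep the class dim
    inter = (probs * targets_onehot).sum(dims)
    card = probs.sum(dims) + targets_onehot.sum(dims)
    return ((2 * inter + smooth) / (card + smooth)).mean()  # mean over classes

# Usage: 2 classes, prediction identical to the target -> Dice of 1.0
targets = torch.tensor([[0, 1], [1, 0]])               # (H, W) class indices
onehot = torch.nn.functional.one_hot(targets, 2)       # (H, W, C)
onehot = onehot.permute(2, 0, 1).unsqueeze(0).float()  # (1, C, H, W)
print(multiclass_dice(onehot, onehot).item())          # 1.0
```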


Rather than using `smooth`, could we just do:

`if m1.sum() + m2.sum() == 0: return 1`, since the convention is to report a Dice score of 1 if both the target / prediction sets are empty?
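
A sketch of what that branch could look like (the name mirrors the thread’s `dice_coeff`, but this variant is an assumption, not the author’s code). One caveat: as a metric this works, but as a training loss the hard branch provides no gradient around the empty case, which is one argument for keeping `smooth` instead:

```python
import torch

def dice_coeff_nosmooth(pred, target):
    m1 = pred.view(pred.size(0), -1)    # Flatten
    m2 = target.view(target.size(0), -1)
    total = m1.sum() + m2.sum()
    if total == 0:
        # Convention: both sets empty -> perfect agreement
        return 1.0
    return (2. * (m1 * m2).sum() / total).item()

print(dice_coeff_nosmooth(torch.zeros(1, 4), torch.zeros(1, 4)))  # 1.0
print(dice_coeff_nosmooth(torch.ones(1, 4), torch.ones(1, 4)))    # 1.0
```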