A very strange behavior of Jaccard loss

For image segmentation problems, I usually train with Jaccard loss instead of pixel-wise cross entropy because the former directly optimizes the evaluation metric (IoU). The Jaccard loss is defined as below:

import torch.nn.functional as F

def jaccard_loss(x, y, ignore_index, num_classes):
    # x: logits (N, C, H, W), y: integer labels (N, H, W)
    p = F.softmax(x, dim=1)                       # per-pixel class probabilities
    m = y != ignore_index                         # mask of valid (non-ignored) pixels
    t = F.one_hot((y * m).long(), num_classes=num_classes).byte().permute(0, 3, 1, 2)
    i = (p * (t * m.unsqueeze(1)).float()).sum((0, 2, 3))               # soft intersection per class
    u = ((p + t.float()) * m.unsqueeze(1).float()).sum((0, 2, 3)) - i   # soft union per class
    v = u.nonzero()                               # keep only classes with a non-empty union
    return 1 - (i[v] / u[v]).mean()               # 1 - mean soft IoU over those classes
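For reference, here is a minimal usage sketch of how I call it; the shapes, the ignore index of 255, and the random tensors are just illustrative assumptions:

import torch

# Hypothetical example: batch of 2, 4 classes, 8x8 images
logits = torch.randn(2, 4, 8, 8)                  # raw network outputs (N, C, H, W)
labels = torch.randint(0, 4, (2, 8, 8))           # integer class labels (N, H, W)
labels[0, 0, 0] = 255                             # mark one pixel with the ignore index

loss = jaccard_loss(logits, labels, ignore_index=255, num_classes=4)
print(loss)  # scalar tensor: 1 - mean soft IoU over classes present in the union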

I trained a ResNet18-UNet with this loss and the result is better than using cross entropy. Here comes the strange part. When I switch to ResNet34-UNet, the training loss and the validation loss are both lower than those of the ResNet18-UNet, which is expected. However, the corresponding IoU is significantly lower. This doesn't make any sense because a lower loss should correspond to a higher IoU. I'm pretty sure the evaluation script is correct because it is from the evaluation server. The loss code should be fine too because it works well for ResNet18-UNet.

Another thing that may or may not be related to this strange behavior is that when I trained ResNet18-UNet I could use a learning rate of 2e-3, but I can only use a learning rate of 8e-5 for ResNet34-UNet. Does anyone have an explanation for this strange behavior?
