Custom loss function (lovasz_loss) failing with tensor dimension mismatch - any tips?

I’m running binary segmentation and wanted to use the lovasz loss function to see if it would clean up the boundaries.

I just passed it in directly, but the dimension sizes seem to be off by a factor of 2 - any tips on how to resolve this?
I re-ran with batch size = 1 and a debug print to simplify the issue and see the mismatch, but it's not clear how to fix it (images are resized to 256x256):

```
torch.Size([1, 2, 256, 256]) logits loss
torch.Size([1, 256, 256]) labels loss
```

error: The size of tensor a (131072) must match the size of tensor b (65536) at non-singleton dimension 0
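For what it's worth, the "factor of 2" in the error is exactly the extra class channel: flattening `[1, 2, 256, 256]` gives twice as many elements as flattening `[1, 256, 256]`. A quick sanity check on the shapes from the debug prints (plain arithmetic, nothing model-specific):

```python
# Shapes from the debug prints above: logits [B, C, H, W], labels [B, H, W].
b, c, h, w = 1, 2, 256, 256

flat_logits = b * c * h * w   # elements after flattening the logits
flat_labels = b * h * w       # elements after flattening the labels

print(flat_logits, flat_labels)  # 131072 65536 -- the two sizes in the error
```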

Do I need to subclass from base_loss to use a new loss function?

forward function from the new loss:

```python
def forward(self, logits, labels):
    """Binary Lovasz hinge loss.

    logits: [B, H, W] tensor, logits at each pixel (between -inf and +inf)
    labels: [B, H, W] tensor, binary ground truth masks (0 or 1)
    per_image: compute the loss per image instead of per batch
    ignore: void class id
    """
    print(logits.shape, "logits loss")
    print(labels.shape, "labels loss")

    if self.per_image:
        loss = mean(self.lovasz_hinge_flat(*flatten_binary_scores(log.unsqueeze(0), lab.unsqueeze(0), self.ignore))
                    for log, lab in zip(logits, labels))
    else:
        loss = self.lovasz_hinge_flat(*flatten_binary_scores(logits, labels, self.ignore))
    return loss
```
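In case it helps to see where the IndexError comes from: `flatten_binary_scores` in the reference Lovász implementation just flattens both tensors to 1-D and drops `ignore` pixels, so it assumes scores and labels have the same number of elements. A numpy sketch of that behaviour (an approximation for illustration, not the exact code):

```python
import numpy as np

def flatten_binary_scores_np(scores, labels, ignore=None):
    """Flatten scores/labels to 1-D and drop pixels labelled `ignore`."""
    scores = scores.reshape(-1)
    labels = labels.reshape(-1)
    if ignore is None:
        return scores, labels
    valid = labels != ignore          # boolean mask -- must match scores' shape
    return scores[valid], labels[valid]

scores = np.array([[0.2, -1.3], [0.7, 2.1]])
labels = np.array([[1, 0], [255, 1]])
s, l = flatten_binary_scores_np(scores, labels, ignore=255)
print(s, l)   # the pixel labelled 255 is dropped
```

The `scores[valid]` line is exactly where your traceback dies: with a 2-channel logits tensor, `scores` has twice as many elements as the boolean mask built from `labels`.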


original error with bs=8:
```
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/ in __call__(self, *input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():

~/unetseg/ in forward(self, logits, labels)
    163         else:
--> 164             loss = self.lovasz_hinge_flat(*flatten_binary_scores(logits, labels, self.ignore))
    165         return loss

~/unetseg/ in flatten_binary_scores(scores, labels, ignore)
    180     valid = (labels != ignore)
--> 181     vscores = scores[valid]
    182     vlabels = labels[valid]

IndexError: The shape of the mask [524288] at index 0 does not match the shape of the indexed tensor [1048576] at index 0
```

Thanks for any input!

My best advice when debugging this type of stuff is to grab a batch of your ground truth and a batch of predictions from your model (e.g. by calling `learn.model` on it directly) and then look at their shapes to see what could be going wrong. Also I'd make sure the output from your unet is actually what you're expecting :slight_smile:

Most of the time I’ll literally go line by line in my loss function to make sure it’s all making sense.

To answer your question on the subclassing, no, you don’t.


great advice, thanks!
after some painstaking work, I finally uncovered the issue - the model needed the number of classes to be 1 instead of 2 to produce the right output shape. (Intuitively you'd think binary means two classes.)
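For anyone hitting the same thing: with a single-channel head the logits come out as `[B, 1, H, W]`, and squeezing the channel dim makes them line up with `[B, H, W]` labels. A minimal numpy sketch of the shape fix (the logit-difference variant at the end is an assumption for anyone stuck with a 2-channel head, not the poster's exact code):

```python
import numpy as np

# With n_classes=1 the model emits [B, 1, H, W]; drop the channel dim
# so the shape matches the [B, H, W] label masks.
logits = np.random.randn(1, 1, 256, 256)
labels = np.random.randint(0, 2, (1, 256, 256))

scores = logits[:, 0]                 # [B, H, W], one signed score per pixel
assert scores.shape == labels.shape   # flatten_binary_scores now lines up

# If you're stuck with a 2-channel head, one common workaround (an
# assumption, not from this thread) is the difference of the two logits,
# which acts as a signed foreground score:
logits2 = np.random.randn(1, 2, 256, 256)
scores2 = logits2[:, 1] - logits2[:, 0]   # [B, H, W]
assert scores2.shape == labels.shape
```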


I am facing a similar problem, so I wonder whether you could share a code snippet of your solution here?

Many thanks in advance!