Adding losses from different tensor subclasses fails


I am currently working on an object detection task (the DetectionNetwork from this paper). The network outputs the object's position and a classification of whether an object is present in the image at all.
I use PointBlock and MultiCategoryBlock for the DataBlock. Adding the individual losses for these two outputs, so that I can call backward on the total loss, results in an error:

TypeError: unsupported operand type(s) for +: 'TensorPoint' and 'TensorMultiCategory'

I found no way to add these without destroying the computational graph. Is there a way to cast them back to the plain tensor class?

Minimal working example:

from fastai.vision.all import *
loss_1 = TensorMultiCategory(1.0, requires_grad=True)
loss_2 = TensorPoint(2.0, requires_grad=True)
loss = loss_1 + loss_2

You can thank PyTorch for this :slight_smile: Cast them to TensorBase:

TensorBase(loss_1) (this should also keep the graph)
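The same behaviour can be reproduced in plain PyTorch, since fastai's TensorPoint and TensorMultiCategory are ultimately torch.Tensor subclasses: ops that mix two unrelated subclasses raise a TypeError, while casting both operands to a common base class works and keeps the autograd graph. A minimal sketch, using hypothetical stand-in subclasses rather than fastai's real classes:

```python
import torch

# Stand-ins for fastai's tensor subclasses (in fastai, both derive from
# TensorBase, which derives from torch.Tensor).
class TensorPoint(torch.Tensor): pass
class TensorMultiCategory(torch.Tensor): pass

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
loss_1 = a.as_subclass(TensorMultiCategory)  # as_subclass keeps the graph
loss_2 = b.as_subclass(TensorPoint)

# loss_1 + loss_2 would raise a TypeError here too, because neither
# subclass knows how to handle the other. Casting both to the common
# base class fixes the op, and backward() still reaches the leaves.
loss = loss_1.as_subclass(torch.Tensor) + loss_2.as_subclass(torch.Tensor)
loss.backward()
print(a.grad, b.grad)  # both leaves receive a gradient of 1.0
```

In fastai the `TensorBase(loss_1)` cast plays the same role as `as_subclass` above: it changes the Python class of the tensor without copying data or detaching it from autograd.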


That did the trick, thank you very much! :slight_smile: