Target/input size mismatch: strange

Hi, I get a weird error due to a mismatch between the network output size, which is determined by the number of classes, and the target size:

Target size (torch.Size([48])) must be the same as input size (torch.Size([48, 5004]))

My targets have 5004 unique labels. I assumed fastai would transform them into a one-hot encoding to match the output of the network, but this is not happening.
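For context, here is a minimal sketch of the one-hot encoding I expected to happen, which would make the target shape match the logits. The shapes come from the error message above; the tensors themselves are made up:

```python
import torch
import torch.nn.functional as F

# Shapes taken from the error message; the data is random.
batch_size, n_classes = 48, 5004
logits = torch.randn(batch_size, n_classes)           # network output: [48, 5004]
targets = torch.randint(0, n_classes, (batch_size,))  # integer labels: [48]

# One-hot encode so the target shape matches the logits.
one_hot = F.one_hot(targets, num_classes=n_classes).float()
print(one_hot.shape)  # torch.Size([48, 5004])

# soft_margin_loss expects targets in {-1, 1}, so remap the 0/1 one-hot.
soft_targets = one_hot * 2 - 1
loss = F.soft_margin_loss(logits, soft_targets)
```

With the targets encoded this way, both tensors are `[48, 5004]` and the size check passes.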

I use soft margin loss.

This did the trick for me. Except for the flattened version of cross-entropy loss, all the other multiclass classification losses might need an unsqueeze of the target. Maybe a fix is needed in the training code so that the loss doesn't require a custom implementation:

import torch.nn.functional as F
from torch import nn

class SoftMarginLossFlat(nn.Module):  # wrapper class and imports added; the name is illustrative
    def __init__(self):
        super().__init__()
        #self.alpha = alpha
        #self.mult = FocalLoss(2)
        #self.smooth = SmoothF2Loss()

        #self.weight = torch.Tensor([[1.0, 5.97, 2.89, 5.75, 4.64, 4.27, 5.46, 3.2, 14.48, 14.84, 15.14, 6.92, 6.86, 8.12, 6.32, 19.24, 8.48, 11.93, 7.32, 5.48, 11.99, 2.39, 6.3, 3.0, 12.06, 1.0, 10.39, 16.5]]).float().cuda(non_blocking=True)
    def forward(self, input, target, reduction=None):
        #self.pos_weight = torch.Tensor([[10]]).float().cuda(non_blocking=True)
        #print(type(target))
        loss = F.soft_margin_loss(input, target.float().unsqueeze(-1))
        #torch.nn.MultiLabelSoftMarginLoss(weight=self.weight)(input, target)
        return loss.float()