As part of a recent Kaggle competition, I needed to build a custom loss function that computes Pearson's correlation coefficient. Here is the loss function code:
```python
import torch

class Regress_Loss_1(torch.nn.Module):
    def __init__(self):
        super(Regress_Loss_1, self).__init__()

    def forward(self, x, y):
        # Center predictions and targets
        vx = x - torch.mean(x)
        vy = y - torch.mean(y)
        # Pearson correlation coefficient
        cost = torch.sum(vx * vy) / (
            torch.sqrt(torch.sum(vx ** 2)) * torch.sqrt(torch.sum(vy ** 2))
        )
        # Negate so that maximizing correlation minimizes the loss
        return -cost
```

(I removed the `x = input` / `y = target` lines from my first draft, since they overwrote the forward arguments with undefined names, and dropped the `.mean()` call because `cost` is already a scalar.)
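As a quick sanity check (using dummy random tensors; the variable names here are just for illustration), the loss should equal the negative of the Pearson correlation that NumPy computes on the same data:

```python
import torch
import numpy as np

class Regress_Loss_1(torch.nn.Module):
    def __init__(self):
        super(Regress_Loss_1, self).__init__()

    def forward(self, x, y):
        # Center predictions and targets
        vx = x - torch.mean(x)
        vy = y - torch.mean(y)
        # Pearson correlation coefficient, negated as a loss
        cost = torch.sum(vx * vy) / (
            torch.sqrt(torch.sum(vx ** 2)) * torch.sqrt(torch.sum(vy ** 2))
        )
        return -cost

torch.manual_seed(0)
preds = torch.randn(100)
targets = 2.0 * preds + 0.1 * torch.randn(100)  # strongly correlated with preds

loss = Regress_Loss_1()(preds, targets).item()
r = np.corrcoef(preds.numpy(), targets.numpy())[0, 1]
print(loss, -r)  # the two values should agree closely
```

Since `preds` and `targets` are strongly positively correlated, the loss comes out close to -1, which is what you want a correlation-maximizing loss to do.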
Now, after defining my learner, I set this custom loss function to be used during training as follows:
```python
learn_tfidf.crit = Regress_Loss_1()
```

(Note the parentheses: `crit` needs an instance of the loss module, not the class itself, since it will be called as `crit(output, target)`.)
This does not give me any error, but I wanted to check: is this the right approach?