I’m working on a tabular learning model to predict a float value. The target values include both positive and negative numbers. I’d like to implement a custom loss function that applies a greater loss when the predicted value is a different sign from the target. Here’s what I have so far, but this does not work:
```python
import fastai.tabular.all as fta

def Sign(x):
    if x < 0:
        return -1
    elif x > 0:
        return 1
    return 0

def CustomLoss(y, yhat):
    y, yhat = [float(x[0]) for x in list(y)], [float(x[0]) for x in list(yhat)]
    loss = 0
    for i in range(len(y)):
        a, b = y[i], yhat[i]
        if Sign(a) != Sign(b):
            loss += 5*(a - b)**2
        else:
            loss += (a - b)**2
    return loss

learn = fta.tabular_learner(dls, metrics=fta.mae, loss_func=CustomLoss)
```
It is encoded as a Torch loss function (but the behavior should be rather similar). The loss function accepts self, the input values (here: y), and the target values (here: yhat), so I may even have swapped y and yhat; please be aware of that!
.view can be interpreted as a reshaping: both tensors are reshaped to have the same form, so the elementwise operations that follow line up.
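As a quick illustration of that reshaping step (the tensor values and shapes here are made up; model outputs often arrive as a column of shape `(batch, 1)` while targets are flat):

```python
import torch

# .view reshapes a tensor without copying its data; flattening both
# sides to 1-D guarantees they have the same form.
preds = torch.tensor([[0.5], [-1.2], [2.0]])  # shape (3, 1)
targets = torch.tensor([0.4, 1.0, 2.5])       # shape (3,)

preds_flat = preds.view(-1)      # shape (3,)
targets_flat = targets.view(-1)  # already flat, so this is a no-op

print(preds_flat.shape, targets_flat.shape)
```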
The residuals are your elementwise a - b, but in vector form.
Now, to simplify the next part, unwrapping it a little may be useful.
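Putting the pieces above together, a vectorized sketch of the loss could look like this (the class name `SignPenaltyMSE` and the `penalty` parameter are my own naming; I also average with `.mean()` rather than summing as your loop does, since that is the usual convention for torch losses):

```python
import torch

class SignPenaltyMSE(torch.nn.Module):
    """MSE that multiplies the squared error by `penalty` whenever the
    prediction's sign differs from the target's sign."""
    def __init__(self, penalty: float = 5.0):
        super().__init__()
        self.penalty = penalty

    def forward(self, input, target):
        # Reshape both tensors to the same 1-D form.
        input, target = input.view(-1), target.view(-1)
        # The residuals are the elementwise a - b, in vector form.
        residuals = target - input
        # Boolean mask of the positions where the signs disagree.
        mismatch = torch.sign(input) != torch.sign(target)
        # Weight 5 (penalty) on mismatched signs, weight 1 elsewhere.
        weights = torch.where(mismatch,
                              torch.full_like(residuals, self.penalty),
                              torch.ones_like(residuals))
        return (weights * residuals ** 2).mean()
```

Because everything stays a tensor operation, autograd can differentiate through it, unlike the float-and-loop version. It should then plug into the learner as, e.g., `fta.tabular_learner(dls, metrics=fta.mae, loss_func=SignPenaltyMSE())`.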