Creating your own loss function

It may not be as simple as this, but when I look at the layers documentation I see this for MSELossFlat:

class FlattenedLoss():
    "Same as `func`, but flattens input and target."
    def __init__(self, func, *args, axis:int=-1, floatify:bool=False, is_2d:bool=True, **kwargs):
        self.func,self.axis,self.floatify,self.is_2d = func(*args,**kwargs),axis,floatify,is_2d

    def __repr__(self): return f"FlattenedLoss of {self.func}"
    @property
    def reduction(self): return self.func.reduction
    @reduction.setter
    def reduction(self, v): self.func.reduction = v

    def __call__(self, input:Tensor, target:Tensor, **kwargs)->Rank0Tensor:
        input = input.transpose(self.axis,-1).contiguous()
        target = target.transpose(self.axis,-1).contiguous()
        if self.floatify: target = target.float()
        input = input.view(-1,input.shape[-1]) if self.is_2d else input.view(-1)
        return self.func.__call__(input, target.view(-1), **kwargs)


def MSELossFlat(*args, axis:int=-1, floatify:bool=True, **kwargs):
    "Same as `nn.MSELoss`, but flattens input and target."
    return FlattenedLoss(nn.MSELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)

If I wanted to use MAE loss instead would it be as simple as this?

def MAELossFlat(*args, axis:int=-1, floatify:bool=True, **kwargs):
    "Same as `nn.L1Loss`, but flattens input and target."
    return FlattenedLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)

And if it is that simple, where do I make that change to add it? From looking at the layers.py file, MSELossFlat appears to be part of the source code.


Hi Nick.

  1. Define your MAELossFlat in the notebook.

  2. The learner creation methods (such as tabular_learner) automatically choose an appropriate loss function for you. Just replace it afterwards with
    learn.loss_func = MAELossFlat()

Then train.
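Putting those two steps together in a notebook would look roughly like this. This is only a sketch: it assumes the version of fastai whose layers.py is quoted above, that FlattenedLoss and tabular_learner are already in scope from the usual imports, and that dbunch is the data object used later in this thread; the layer sizes are just illustrative.

import torch.nn as nn

# Defined in the notebook, mirroring MSELossFlat but wrapping nn.L1Loss.
def MAELossFlat(*args, axis:int=-1, floatify:bool=True, **kwargs):
    "Same as `nn.L1Loss`, but flattens input and target."
    return FlattenedLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)

learn = tabular_learner(dbunch, layers=[1000,500])  # the factory picks a default loss
learn.loss_func = MAELossFlat()                     # swap in the flattened MAE loss

Note that MAELossFlat is a factory returning a FlattenedLoss instance, so you assign the result of calling it, MAELossFlat(), not the bare function.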


I’ve created the mae function as:

def _error(actual: np.ndarray, predicted: np.ndarray):
    """ Simple error """
    return actual - predicted

def mae(actual: np.ndarray, predicted: np.ndarray):
    """ Mean Absolute Error """
    return np.mean(np.abs(_error(actual, predicted)))

Then I train it as:

learn = tabular_learner(dbunch, layers=[1000,500], loss_func=mae,
                        config=tabular_config(ps=[0.001,0.01], embed_p=0.04, y_range=y_range),
                        metrics=mae)

But got the error:

RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.

Have I done anything wrong?

When your loss function gets its values, they come as PyTorch tensors. You should do what the error message says: detach them and convert them to numpy before doing any math.
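For example, applied to the numpy-based mae above, that suggestion would look roughly like this (a sketch only; as a later post points out, once the values are converted to numpy, PyTorch can no longer backpropagate through the result, so this cannot serve as a training loss):

import numpy as np

def mae(actual, predicted):
    """ Mean Absolute Error on tensors: detach and move to CPU before any numpy math """
    actual = actual.detach().cpu().numpy()
    predicted = predicted.detach().cpu().numpy()
    return np.mean(np.abs(actual - predicted))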


Many thanks.

Adding "with torch.no_grad():" before the math seems to solve the problem.


fastai is based on PyTorch, which has differentiation and backpropagation built in. AFAIK, you must use PyTorch loss functions to have this happen automatically. Numpy is not PyTorch and will not do these things. Check out the PyTorch docs for L1Loss - you do not have to build mae yourself.
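For example, a minimal sketch of plugging nn.L1Loss straight into the earlier tabular_learner call (reusing the names from the post above; only the loss_func argument changes, and the metric is left out here for brevity):

import torch.nn as nn

# nn.L1Loss computes mean absolute error and supports autograd, so it can be
# used directly as the training loss.
learn = tabular_learner(dbunch, layers=[1000,500], loss_func=nn.L1Loss(),
                        config=tabular_config(ps=[0.001,0.01], embed_p=0.04, y_range=y_range))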

@Pomo Many thanks. I am quite new to fastai and I've overthought this. Simply calling nn.L1Loss directly in the code is a better way to do it.

No problem. It takes time and practice to learn how the various parts work together.