Hi,

I’m trying to use a custom loss function with some tabular data. My loss function is:

```
def myLossFunc(pred: Tensor, targ: Tensor):
    # targets lie in [-0.5, 0.5], predictions in [0, 1]
    params = torch.log10(1 + targ)
    pred_approx = torch.log10(1 - pred)
    to_sum = params * pred_approx
    return to_sum.sum()
```

That is, my target is a coefficient in the range [-0.5, 0.5], my y_range is [0, 1], and I want the loss to be the sum of log10(1 + target) * log10(1 - prediction) over the batch.
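Here is a plain-Python sketch (floats standing in for the tensor ops) of what I suspect might be happening at the edge of these ranges:

```python
import math

# If the model's output saturates at the top of y_range, pred == 1.0 and
# 1 - pred == 0. torch.log10(0) returns -inf rather than raising, so I
# mimic that here with float('-inf').
pred_approx = float('-inf')      # plays the role of torch.log10(1 - 1.0)

# A target of 0 gives log10(1 + 0) == 0.0 ...
params = math.log10(1 + 0.0)

# ... and 0.0 * -inf is NaN under IEEE-754 rules, which would then
# propagate through the .sum() and poison the whole loss.
print(params * pred_approx)      # nan
```

If that is the cause, I suppose clamping the predictions away from 1 would help, but I'd like to confirm the diagnosis first.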

I load my data as follows (idx is a permuted array of row indices):

```
data = (TabularList.from_df(df, cont_names=['inputA', 'inputB', ...])
        .split_by_idx(idx[:3000])
        .label_from_df(cols='targ_coeffecient')
        .databunch())
```

and generate my model as follows:

```
learner = tabular_learner(data, layers=[100,100], y_range=torch.tensor([0, 1]), loss_func=myLossFunc)
```

However, when I run fit_one_cycle, both train_loss and valid_loss are always NaN, and I don’t understand why. What is wrong with my loss function? More generally, how should custom loss functions be used with fast.ai?

Thanks.