Modifying loss affects predictions even when not training with that loss?!

Hi all,
A question that came up when I tried working with a pretrained unet_learner on binary segmentation data (grayscale image + binary mask).
The per-pixel predictions I get are all nicely between 0 and 1. But when I replace `learn.loss_func` with some custom function, I get values well outside 0-1 (roughly -100 to 100). Both behaviors occur before I perform any training, so the model is strictly pre-trained in both cases.
How can simply swapping out the loss function change the predictions when I haven't performed any training?
To be clear, all I do is the following (a runnable sketch follows the list):

  1. `learn = unet_learner(data, models.resnet34, metrics=[dice])`
  2. optionally redefine `learn.loss_func`
  3. `preds, ys = learn.get_preds(DatasetType.Train)`
  4. check `preds.max()` and `preds.min()`

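Here is roughly the same thing as code. This is a fastai v1 sketch; `data` is assumed to be an already-built segmentation DataBunch, and `my_custom_loss` is a placeholder name for whatever custom function gets plugged in:

```python
from fastai.vision import *  # fastai v1 style import; brings in unet_learner, models, dice, DatasetType

# `data` is assumed to be an already-built segmentation DataBunch (step 1).
learn = unet_learner(data, models.resnet34, metrics=[dice])

# Step 2 (optional): swap in a custom loss before predicting.
# learn.loss_func = my_custom_loss  # placeholder for the custom function

# Steps 3-4: predictions on the training set, then inspect their range.
preds, ys = learn.get_preds(DatasetType.Train)
print(preds.min(), preds.max())
```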
Thanks in advance!

Hi!

Basically, it is because your model has no final activation by default. The activation is applied to the outputs later, when you get the predictions. If you take a look at basic_train.py, you will see a list of common loss functions and the corresponding activations that fastai applies for you.

So I suspect that when you use a custom loss function, fastai no longer finds a matching activation, so you get the raw logits back, which explains those kinds of values.
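You can see the effect directly with plain PyTorch: a recognized loss gets a matching activation applied to the raw outputs inside `get_preds`, while an unrecognized custom loss leaves you with the raw logits. A tiny illustration (the logit values are made up to mirror the ranges you reported):

```python
import torch

# Raw model outputs (logits) can take any real value...
logits = torch.tensor([-100.0, -3.0, 0.0, 3.0, 100.0])

# ...while the activation fastai would pick for a binary loss (sigmoid)
# squashes them into [0, 1]:
probs = torch.sigmoid(logits)

print(logits.min().item(), logits.max().item())  # -100.0 100.0
print(probs.min().item(), probs.max().item())    # ~0.0 ~1.0
```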


I see it now.
Thanks a lot for your help!
Could you tell me the elegant way to set a specific final activation function (until I get my head around the fastai code)?

You should pass your predictions through the activation function you need (softmax, sigmoid, …) manually.
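For a binary mask like yours, that means sigmoid. Something like this (continuing from your snippet above, with `learn` already built):

```python
import torch

# Raw logits come back because fastai doesn't recognize the custom loss...
preds, ys = learn.get_preds(DatasetType.Train)

# ...so apply the final activation yourself: sigmoid for a one-channel
# binary mask; use preds.softmax(dim=1) for a multi-class head instead.
preds = torch.sigmoid(preds)
```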


Got it. Many thanks 🙂