Error assigning Loss Function to Segmentation unet_learner

=== Software === 
python        : 3.7.6
fastai        : 1.0.57
fastprogress  : 0.1.21
torch         : 1.4.0
nvidia driver : 441.22
torch cuda    : 10.1 / is available
torch cudnn   : 7501 / is enabled

=== Hardware === 
nvidia gpus   : 2
torch devices : 2
  - gpu0      : 8129MB | Tesla M60
  - gpu1      : 8129MB | Tesla M60

=== Environment === 
platform      : Windows-10-10.0.14393-SP0
conda env     : ml
python        : C:\Users\-sysop-dur7an-z\.conda\envs\ml\python.exe

Describe the bug

When I train a learner with learn = unet_learner(data, models.resnet50, metrics=metrics) and the default loss function (FlattenedLoss), there is no error and the model trains fine. However, when I try to use another loss function (e.g. learn.loss_func = nn.BCEWithLogitsLoss()), I get the following error:

ValueError: Target size (torch.Size([2, 1, 64, 128])) must be the same as input size (torch.Size([2, 2, 64, 128]))

Do you know why this is?

Provide your installation details

To Reproduce

learn = unet_learner(data, models.resnet50, metrics=metrics)
learn.loss_func = nn.BCEWithLogitsLoss()

lr_find(learn)

ValueError: Target size (torch.Size([2, 1, 64, 128])) must be the same as input size (torch.Size([2, 2, 64, 128]))

BCEWithLogitsLoss expects a channel for each class (2 in this case), while your target has a single channel.
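The mismatch can be reproduced with plain PyTorch tensors (a minimal sketch, independent of fastai): CrossEntropyLoss, which fastai's default FlattenedLoss wraps for segmentation, takes class-index targets, whereas BCEWithLogitsLoss requires the target to have exactly the same shape as the input.

```python
import torch
import torch.nn as nn

pred = torch.randn(2, 2, 64, 128)            # model output: [N, classes, H, W]
mask = torch.randint(0, 2, (2, 1, 64, 128))  # target mask: [N, 1, H, W]

# CrossEntropyLoss takes class indices, so the single-channel target
# works once the channel dimension is squeezed away:
ce = nn.CrossEntropyLoss()(pred, mask.squeeze(1))

# BCEWithLogitsLoss instead needs input and target of identical shape,
# so [2, 2, 64, 128] vs [2, 1, 64, 128] raises the ValueError above:
try:
    nn.BCEWithLogitsLoss()(pred, mask.float())
except ValueError as e:
    print(e)  # "Target size ... must be the same as input size ..."
```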

@rpdunne did you find a solution for this? @juvian I'm in a similar position. I think it's because I'm trying to use the original camvid notebook modified for a binary classification problem. It's working okay in terms of initial results, but I wanted to change my loss function and now I'm running into a similar headache.

Should I be trying to change the "target = " part of the code, or addressing this elsewhere?

You should either set the number of classes to 2 so that it works out of the box, or use a custom loss function that takes just the first channel and applies the loss you want.

Thanks @juvian - out of the box sounds much more in line with what I want to do. My next stupid question is: where are you setting the classes to 2? I thought I had already done that implicitly by only having two classes in the code.


Check what the value of data.c is.

data.c gives 2, @juvian.

I also found another function to try and debug with, below. Now I'm wondering: is this a case of me not passing the masks in correctly? Or that my masks are grayscale and it's expecting RGB?

Make a custom loss function that does something like this (since the model outputs raw logits, BCEWithLogitsLoss is the right choice rather than plain BCELoss):

def custom_loss(input, target):
    # keep only the first channel of the prediction: [N, 2, H, W] -> [N, H, W]
    input = input[:, 0, :, :]
    # BCEWithLogitsLoss needs matching shapes and a float target,
    # so squeeze the mask's channel dimension and cast it
    return nn.BCEWithLogitsLoss()(input, target.squeeze(1).float())

If you are using a pretrained model (3-channel input), it should be RGB! Examples are available if you search the forums.

Actually, it is possible to load grayscale images as 3-channel images by duplicating the single channel. This way pretrained ImageNet weights can be used (which also improves results).
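The channel-duplication idea can be sketched in a couple of lines of PyTorch (shapes here are illustrative, not from the thread):

```python
import torch

gray = torch.rand(1, 64, 128)   # single-channel image tensor: [1, H, W]
rgb = gray.expand(3, -1, -1)    # duplicate the channel -> [3, H, W], no copy
print(rgb.shape)                # torch.Size([3, 64, 128])
```

expand creates a view rather than copying memory; all three channels share the same underlying data, which is fine for feeding a pretrained 3-channel backbone.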