Current best practice for Unet_learner binary segmentation?

(Less ) #1

I’ve spent half a day following the long unet binary classification thread and several others here on the board, plus doing lots of testing in Jupyter, but I’m still hitting issues. So my question:

What is the current best practice for doing binary unet segmentation? Are we still supposed to do the subclassing? A la:

class MySegmentationLabelList(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class MySegmentationItemList(ImageList):
    "ItemList suitable for segmentation tasks."
    _label_cls,_square_show_res = MySegmentationLabelList,False

src = (MySegmentationItemList(fnames)
       .split_by_rand_pct(.2)
       .label_from_func(get_y_fn, classes=classes))

or can we now just use SegmentationItemList and push a div=True in there somewhere to ensure it’s a mask of 1s/0s?

I’m going to go browse the source code, because there are too many differing recommendations across the various threads, and some are now obsolete (e.g. using ImageItemList), so I’m unclear on the current proper way to do binary segmentation.

Any input would be appreciated!


(Less ) #2

After reading the code and revisiting the various threads, this is how I’m doing my subclassing for now:

class BinaryLabelList(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class BinaryItemList(SegmentationItemList):
    _label_cls = BinaryLabelList

and then creating the dataset as:
codes = np.array(['background', 'watch']); codes

src = (BinaryItemList(fnames)
    .split_by_rand_pct(.2)
    .label_from_func(get_y_fn, classes=codes))


tfms = get_transforms(flip_vert=True)


data = (src.transform(tfms, size=100, tfm_y=True)
        .databunch(bs=bs, num_workers=4)
        .normalize(imagenet_stats))
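
From there, a learner can be created in the usual way (a minimal sketch assuming the stock fastai v1 API; resnet34 and the dice metric are just example choices, not requirements):

learn = unet_learner(data, models.resnet34, metrics=[dice])
learn.fit_one_cycle(5, slice(1e-3))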

(Ashutosh Raj) #3

How do I visualise the binary segmentation result of a unet learner? My output is not showing anything.
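
For reference, this is roughly what I’m trying (a sketch; show_results and predict are the stock fastai v1 calls):

learn.show_results(rows=3)  # should overlay predicted masks on validation images

# or per image:
img = data.valid_ds.x[0]
pred = learn.predict(img)[0]  # for segmentation this is an ImageSegment
img.show(y=pred, alpha=0.5)   # overlay the predicted mask on the input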


(Nicholas Wickman) #4

Can you share the loss function you are using?

Unet is returning a 2-channel output with predictions for each class, which does not play nicely with all of the library loss functions, while my target has only 1 channel. If I use basic cross-entropy, my model simply predicts 0s for everything.
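
If it helps to compare: as far as I can tell, the library’s default for segmentation labels is CrossEntropyFlat(axis=1), which handles exactly this shape mismatch. A sketch of setting it explicitly (resnet34 is just an example):

learn = unet_learner(data, models.resnet34)
# flattens (bs, n_classes, H, W) logits against a (bs, 1, H, W) long-typed mask
learn.loss_func = CrossEntropyFlat(axis=1)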


(xnet) #5

Does this work with dataframes? I applied all the fixes.

class SegLabelListCustom(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class SegItemListCustom(SegmentationItemList):
    _label_cls = SegLabelListCustom

src = (SegItemListCustom.from_df(trnval_df, path=path, cols='filename_x')
       .split_none()
       .label_from_df(cols='filename_y'))

I am getting:

~/anaconda3/envs//lib/python3.6/site-packages/fastai/data_block.py in process(self)
    529         "Process the inner datasets."
    530         xp,yp = self.get_processors()
--> 531         for ds,n in zip(self.lists, ['train','valid','test']): ds.process(xp, yp, name=n)
    532         #progress_bar clear the outputs so in some case warnings issued during processing disappear.
    533         for ds in self.lists:

~/anaconda3/envs//lib/python3.6/site-packages/fastai/data_block.py in process(self, xp, yp, name)
    694     def process(self, xp:PreProcessor=None, yp:PreProcessor=None, name:str=None):
    695         "Launch the processing on `self.x` and `self.y` with `xp` and `yp`."
--> 696         self.y.process(yp)
    697         if getattr(self.y, 'filter_missing_y', False):
    698             filt = array([o is None for o in self.y.items])

~/anaconda3/envs//lib/python3.6/site-packages/fastai/data_block.py in process(self, processor)
     81         if processor is not None: self.processor = processor
     82         self.processor = listify(self.processor)
---> 83         for p in self.processor: p.process(self)
     84         return self
     85 

~/anaconda3/envs//lib/python3.6/site-packages/fastai/vision/data.py in process(self, ds)
    370     "`PreProcessor` that stores the classes for segmentation."
    371     def __init__(self, ds:ItemList): self.classes = ds.classes
--> 372     def process(self, ds:ItemList):  ds.classes,ds.c = self.classes,len(self.classes)
    373 
    374 class SegmentationLabelList(ImageList):

TypeError: object of type 'NoneType' has no len()

If I call the code without .label_from_df(), it works, but then I don’t get the labels. The cols in my df are basically the file paths to the images. My masks are 3-channel JPEGs: binary masks with values 0 or 255.
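
Looking at the traceback, ds.classes is None when SegmentationProcessor runs, so my guess is that classes has to be passed to label_from_df explicitly (a hedged sketch; the class names here are made up):

codes = ['background', 'object']  # hypothetical class names
src = (SegItemListCustom.from_df(trnval_df, path=path, cols='filename_x')
       .split_none()
       .label_from_df(cols='filename_y', classes=codes))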


#6

@LessW2020: I just use SegmentationItemList, e.g.

codes = ["background", "building"]
src = (SegmentationItemList.from_df(dataset_df, path=data_dir )
      . split_from_df(col="is_valid")
       .label_from_df(cols="label", classes=codes))
data = (src.transform(get_transforms(do_flip=True, 
             flip_vert=True, 
             max_rotate=180, 
             max_zoom=1.2, 
             max_lighting=0.5,
             max_warp=0.2, 
             p_affine=0.75, 
             p_lighting=0.75), size=size, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

but you do have to process your masks to contain 0s and 1s only (otherwise PyTorch complains).
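
Something like this one-off PIL conversion works (a sketch; the paths are placeholders, and I’d save to PNG since JPEG compression reintroduces in-between values):

from PIL import Image
import numpy as np

def binarize_mask(in_path, out_path):
    m = np.array(Image.open(in_path).convert('L'))  # collapse to a single channel
    Image.fromarray((m > 127).astype(np.uint8)).save(out_path)  # pixels become 0/1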


#7

You can try binary cross entropy (BCE) loss, since it helps to compensate for sparse class instances (e.g. you have a lot more 0s than 1s). I’ve also seen people use a mixture of BCE + dice loss, or Lovasz-Softmax loss (which optimises the IoU metric). In my current experiment segmenting buildings in satellite imagery, I used BCE + Lovasz-Softmax and my dice score roughly doubled compared to the default loss that comes with the unet learner.
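
For the dice part, a minimal sketch of what BCE + dice can look like with single-channel logits (not exactly the loss I used, just the shape of the idea):

import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, smooth=1.0):
    logits = logits.squeeze(1).float()    # assumes (bs, 1, H, W) logits
    targets = targets.squeeze(1).float()  # 0/1 masks of the same shape
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum()
    dice = 1 - (2 * inter + smooth) / (probs.sum() + targets.sum() + smooth)
    return bce + dice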


(Akshay Goel) #8

@wwymak Do you know of a good example where someone used a mixture of BCE + dice loss with fast.ai?

@NicWick My UNet for a binary segmentation problem is also set up with two channels as output. I’m having trouble using loss functions with Dice due to this issue.


#9

I’ve used fastai with BCE + Lovasz-Softmax here; you can more or less just substitute dice (or another custom loss) for Lovasz-Softmax in my combined_loss2 function.


(Akshay Goel) #10

Thank you so much for sharing this! @wwymak

Did you see significant improvements using this combined loss over the default?


#11

yeah, quite a bit (I can’t remember exactly how much, but definitely > 5% dice score)


(Akshay Goel) #12

@wwymak

I found an interesting tweak when using the Lovasz loss function for a binary segmentation problem (0: background, 1: object). I was getting very buggy behavior initially, but modifying the first line to include - logits[:,0,:,:].float() helped a lot! I think it’s because my class 1 prediction is very sparse, so many of the samples are just background, and without the background logit those background-only cases would contribute essentially no loss gradient. Anyway, I hope someone finds this trick useful.

def combined_loss2(logits, labels):
    # collapse the 2-channel logits into a single foreground-vs-background score
    logits = logits[:,1,:,:].float() - logits[:,0,:,:].float()
    labels = labels.squeeze(1).float()  # (bs, 1, H, W) -> (bs, H, W)

    lh_loss = lovasz_hinge_flat(*flatten_binary_scores2(logits, labels))
    bce_loss = F.binary_cross_entropy_with_logits(logits, labels)

    return 0.8 * bce_loss + lh_loss
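
To actually use it, I just point the learner at it (assuming the stock fastai v1 setup):

learn = unet_learner(data, models.resnet34)
learn.loss_func = combined_loss2  # replaces the default segmentation loss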

(Akshay Goel) #13

Actually, I am not sure that modification makes a difference. For some reason, this loss function is not fitting the training data the way BCE does (I tested it on a very small set, and BCE fit it easily).


#14

You mean the logits = logits[:,1,:,:].float() - logits[:,0,:,:].float() tweak? Or the Lovasz-Softmax?


(Akshay Goel) #15

The Lovasz-Softmax still hasn’t worked. I spent a few hours trying to get it working, looking at the original GitHub repo. I also referenced the version you shared! @wwymak

I finally sanity-checked everything by running my model on a training set of ~100 images with the default (BCE). This worked perfectly. But Lovasz had issues and couldn’t fit the training set.

The task is binary segmentation (code 0 = background, code 1 = class 1), and the data is imbalanced in that most of the pixels are background. I am using a ResNet34.
