I’ve spent half a day following the long UNet binary classification thread and several others here on the board, plus lots of testing in Jupyter, but I’m still hitting issues. So my question: what is the current best practice for binary UNet segmentation? Are we still supposed to do the subclassing, à la:
```python
class MySegmentationLabelList(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class MySegmentationItemList(ImageList):
    "ItemList suitable for segmentation tasks."
    _label_cls,_square_show_res = MySegmentationLabelList,False

src = (MySegmentationItemList(fnames)
       .split_by_rand_pct(0.2)
       .label_from_func(get_y_fn, classes=classes))
```
or can we now just use SegmentationItemList and pass div=True in there somewhere to ensure it’s a 0/1 mask?
I’m going to go browse the source code, because there are way too many different recommendations across the various threads, and some are now obsolete (e.g. using ImageItemList), so I’m unclear what exactly the current proper way to do binary segmentation is.
Unet is returning a 2-channel output with predictions for each class, which does not play nicely with all of the library loss functions, since my target has only 1 channel. If I use basic cross-entropy, my model simply predicts 0 for everything.
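For reference on the shape mismatch: fastai’s segmentation default, `CrossEntropyFlat(axis=1)`, flattens the 2-channel output so each pixel becomes one classification example against an integer mask. A minimal pure-PyTorch sketch of that shape contract (the helper name `flat_cross_entropy` is mine, not the library’s):

```python
import torch
import torch.nn.functional as F

def flat_cross_entropy(logits, target):
    """Cross-entropy for segmentation: logits [B, C, H, W], target [B, 1, H, W]
    of integer class ids. Mirrors the shape handling of CrossEntropyFlat(axis=1)."""
    b, c, h, w = logits.shape
    # flatten the spatial dims so every pixel is a separate classification example
    logits_flat = logits.permute(0, 2, 3, 1).reshape(-1, c)  # [B*H*W, C]
    target_flat = target.reshape(-1).long()                  # [B*H*W]
    return F.cross_entropy(logits_flat, target_flat)

# toy check: 2-channel logits vs. a 1-channel binary mask
logits = torch.randn(4, 2, 8, 8)
mask = torch.randint(0, 2, (4, 1, 8, 8))
loss = flat_cross_entropy(logits, mask)
```

So a 2-channel output with a 1-channel integer mask is the expected pairing for cross-entropy; the mismatch only bites with losses like plain BCE that want the target shaped like the output.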
```
~/anaconda3/envs//lib/python3.6/site-packages/fastai/data_block.py in process(self)
    529         "Process the inner datasets."
    530         xp,yp = self.get_processors()
--> 531         for ds,n in zip(self.lists, ['train','valid','test']): ds.process(xp, yp, name=n)
    532         #progress_bar clear the outputs so in some case warnings issued during processing disappear.
    533         for ds in self.lists:

~/anaconda3/envs//lib/python3.6/site-packages/fastai/data_block.py in process(self, xp, yp, name)
    694     def process(self, xp:PreProcessor=None, yp:PreProcessor=None, name:str=None):
    695         "Launch the processing on `self.x` and `self.y` with `xp` and `yp`."
--> 696         self.y.process(yp)
    697         if getattr(self.y, 'filter_missing_y', False):
    698             filt = array([o is None for o in self.y.items])

~/anaconda3/envs//lib/python3.6/site-packages/fastai/data_block.py in process(self, processor)
     81         if processor is not None: self.processor = processor
     82         self.processor = listify(self.processor)
---> 83         for p in self.processor: p.process(self)
     84         return self
     85

~/anaconda3/envs//lib/python3.6/site-packages/fastai/vision/data.py in process(self, ds)
    370     "`PreProcessor` that stores the classes for segmentation."
    371     def __init__(self, ds:ItemList): self.classes = ds.classes
--> 372     def process(self, ds:ItemList): ds.classes,ds.c = self.classes,len(self.classes)
    373
    374 class SegmentationLabelList(ImageList):

TypeError: object of type 'NoneType' has no len()
```
If I call the code without .label_from_df(), it works, but then I can’t get the labels. The cols in my df are basically the file paths to the images. My masks are 3-channel JPEGs: binary masks with values 0 or 255.
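For what it’s worth, 0/255 JPEG masks also need collapsing to 0/1 class indices (which is what `open_mask(fn, div=True)` handles). A minimal numpy sketch of that step, assuming the mask loads as an H×W×3 array; note the thresholding rather than dividing, since JPEG compression can leave near-0/near-255 values instead of exact ones (the helper name is mine):

```python
import numpy as np

def binarize_mask(mask_rgb):
    """Collapse an H x W x 3 mask of 0/255 values to a single-channel 0/1 array.
    Threshold at the midpoint to absorb JPEG compression noise."""
    gray = mask_rgb[..., 0]             # channels are identical in a binary mask
    return (gray > 127).astype(np.int64)

# toy mask: a 2x2 white square of 255s on a black background
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[1:3, 1:3] = 255
binary = binarize_mask(mask)
```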
You could try binary cross-entropy (BCE) loss, since it helps compensate for sparse class instances (e.g. when you have a lot more 0s than 1s). I’ve also seen people use a mixture of BCE + dice loss, or the Lovász-Softmax loss (which optimises the IoU metric). In my current experiment segmenting buildings in satellite imagery, I used BCE + Lovász-Softmax and my dice accuracy roughly doubled compared to what comes as the default with the UNet learner…
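For anyone wanting a concrete starting point, here’s a minimal plain-PyTorch sketch of one common BCE + dice combination; the 50/50 weighting and smoothing constant are arbitrary choices of mine, not something from the setup described above:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, smooth=1.0):
    """Soft dice loss; logits [B, 1, H, W], target same shape with values in {0, 1}."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    union = probs.sum() + target.sum()
    return 1 - (2 * intersection + smooth) / (union + smooth)

def bce_dice_loss(logits, target, bce_weight=0.5):
    """Weighted sum of BCE-with-logits and soft dice."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce_weight * bce + (1 - bce_weight) * dice_loss(logits, target)

# toy check on random logits against a random binary target
logits = torch.randn(2, 1, 8, 8)
target = torch.randint(0, 2, (2, 1, 8, 8)).float()
loss = bce_dice_loss(logits, target)
```

The dice term rewards overlap directly, which is what makes the combination less prone to collapsing to all-background on imbalanced masks than BCE alone.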
@wwymak Do you know of a good example where someone used a mixture of BCE + dice loss with fast.ai?
@NicWick My UNet for a binary segmentation problem is also set up with two channels as output. I’m having trouble using loss functions with Dice due to this issue.
I’ve used fastai with BCE + Lovász-Softmax here; you can more or less just substitute dice (or another custom loss) for Lovász-Softmax in my combined_loss2 function.
I found an interesting tweak when using the Lovász loss function for a binary segmentation problem (0: background, 1: object). I was getting very buggy behavior initially, but modifying the first line to include - logits[:,0,:,:].float() helped a lot! I think it’s because my class-1 predictions are very sparse, so many of the samples are just background, and for those samples there would essentially be no loss gradient. Anyway, hope someone finds this trick useful.
Actually, I am not sure if that modification makes a difference. For some reason, this loss function is not fitting the training data relative to BCE. (I tested it on a very small set and BCE fit it easily).
The Lovász-Softmax still hasn’t worked. I spent a few hours trying to get it working, looking at the original GitHub repo, and also referenced the version you shared, @wwymak.
I finally sanity-checked everything by running my model on a training set of ~100 images with the default loss (BCE). That worked perfectly, but Lovász still had issues and couldn’t fit the training set.
The task is binary classification (code 0 = background, code 1 = class 1), and the data is imbalanced in that most of the pixels are background. I am using a ResNet34 backbone.
How do I go about getting a printout of IOU for binary segmentation along with the Dice score? I have tried:
```python
learn = unet_learner(data, models.resnet34, metrics=[dice(True)])
```
Thanks Julian. I no longer get an error, and the value does seem to align with what I expect the IoU to be. However, the heading of the output column still says ‘dice’. That’s to be expected, correct? (As opposed to the column actually being titled ‘iou’.)
Yeah, it uses the function name. For a partial you can change it with partial_func.__name__ = "dice iou". Or just define a custom function with the name you want and call dice(*args, iou=True) inside it.
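To illustrate both options with a toy stand-in (the metric body below is illustrative only, not fastai’s implementation): functools.partial objects have no `__name__` by default, which is why the column heading falls back to the original function’s name unless you set one:

```python
from functools import partial

def dice(pred, targ, iou=False):
    """Toy stand-in with fastai's dice(iou=...) signature.
    pred and targ are flat 0/1 lists here just to keep the sketch runnable."""
    inter = sum(p and t for p, t in zip(pred, targ))
    if iou:
        union = sum(p or t for p, t in zip(pred, targ))
        return inter / union if union else 1.0
    total = sum(pred) + sum(targ)
    return 2 * inter / total if total else 1.0

# Option 1: a partial with a manually assigned __name__
iou_metric = partial(dice, iou=True)
iou_metric.__name__ = "iou"

# Option 2: a plain wrapper function, whose own name is picked up automatically
def iou(pred, targ):
    return dice(pred, targ, iou=True)
```

Either one can then go in `metrics=[...]`, and the recorder column will show the name you chose.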