Unet_binary segmentation

With a double **, I edited my post.


@sgugger here is my code for segmentation. Oddly, I am getting a very high validation loss within a few epochs of training, while the dice accuracy is not bad. What could be the reason for this?

class SegmentationLabelList(ImageItemList):
    def __init__(self, items:Iterator, classes:Collection=None, **kwargs):
        super().__init__(items, **kwargs)
        # open masks with div=True so 0/255 pixel values become class indices 0/1
        self.classes,self.loss_func,self.create_func = classes,CrossEntropyFlat(),partial(open_mask, div=True)
        self.c = len(self.classes)

    def new(self, items, classes=None, **kwargs):
        return self.__class__(items, ifnone(classes, self.classes), **kwargs)

class SegmentationItemList(ImageItemList):
    def __post_init__(self):
        super().__post_init__()
        self._label_cls = SegmentationLabelList

src = (SegmentationItemList.from_folder(path_img)
       .random_split_by_pct(0.2)
       .label_from_func(get_y_fn, classes=codes))
data = (src.transform(get_transforms(), size=size, tfm_y=True)
    .databunch(bs=bs)
    .normalize(imagenet_stats))
def dice(input:Tensor, targs:Tensor, iou:bool=True)->Rank0Tensor:
    "Dice coefficient metric for binary target. If iou=True, returns iou metric, classic for segmentation problems."
    n = targs.shape[0]
    input = input.argmax(dim=1).view(n,-1)
    targs = targs.view(n,-1)
    intersect = (input*targs).sum().float()
    union = (input+targs).sum().float()
    if not iou: return 2. * intersect / union
    else: return intersect / (union-intersect+1.0)
learn = Learner.create_unet(data, models.resnet34, metrics=dice)
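
As a quick sanity check on that metric, here is a toy example (made-up tensors, not real model output). With a perfect 2x2 prediction the dice score is 1.0, while the iou=True variant comes out lower because of the +1.0 term added to the denominator:

import torch

preds = torch.tensor([[[[0.1, 0.9], [0.9, 0.1]],    # class-0 scores
                       [[0.9, 0.1], [0.1, 0.9]]]])  # class-1 scores -> argmax mask [[1, 0], [0, 1]]
targs = torch.tensor([[[1, 0], [0, 1]]])            # matching binary target
print(dice(preds, targs, iou=False))  # tensor(1.) -> perfect overlap
print(dice(preds, targs, iou=True))   # tensor(0.6667) because of the +1.0 in the denominator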

Lowering the learning rate solved the problem.
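
For anyone hitting the same thing, a rough sketch of that fix, assuming the learn object from the code above (the exact values here are made up for illustration):

learn.lr_find()
learn.recorder.plot()                       # pick a rate well before the loss curve blows up
learn.fit_one_cycle(10, slice(1e-5, 1e-4))  # e.g. an order of magnitude lower than before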

I have also been working on applying the camvid lesson to binary segmentation, specifically trying to implement Carvana with fastai v1. I am getting cuda runtime error (59): device-side assert triggered and believe I need to add a fix for the mask opening.

Since the newer API replaced the old SegmentationDataset, I believe we now use SegmentationItemList, which does not accept set_attr(...) or offer div=True as an option.

I would love a pointer on how to set the mask opener in the new setup to handle binary masks of 0 and 255.

For reference, here is my src/data at the moment:

src = (SegmentationItemList.from_folder(train_128_path, True)
    .split_by_idx(valid_idx=range(4065,5087))
    .label_from_func(get_y_fn, classes=codes))

data = (src.transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

You can see my code above, which solves the problem. I custom-defined SegmentationItemList and SegmentationLabelList to handle the issue. We need to pass div=True to the open_mask function.
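
For anyone unsure what div=True actually does, a small sketch (the mask path is hypothetical): a binary PNG mask stores pixel values 0 and 255, and class index 255 sends CrossEntropyFlat out of range, which surfaces as the CUDA device-side assert. div=True rescales the values to 0 and 1.

from fastai.vision import open_mask

mask = open_mask('masks/img_001_mask.png', div=True)  # hypothetical path; div=True divides pixel values by 255
print(mask.data.unique())                             # expect tensor([0, 1]) for a binary mask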

@jeremy In the above training cycle I got an abnormally high validation error. Is it something similar to what you mentioned in the talk today? Reducing the training error kind of avoided the issue for me.

Thanks, I am using it. I had assumed that a hook or assert of some sort would be added in version 1.0.24 (vs. still requiring redefining the function).

Hi, I had a similar issue with the new changes, and a little digging in the code led me this way:

src = (SegmentationItemList.from_folder(train_128_path, True)
    .split_by_idx(valid_idx=range(4065,5087))
    .label_from_func(get_y_fn, classes=codes))
# change open_mask for the target values so masks are opened with div=True
src.train.y.create_func = partial(open_mask, div=True)
src.valid.y.create_func = partial(open_mask, div=True)
data = (src.transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

Hope this will help.


Hello, can you be more specific please?

For the new API, I am getting the same error with:

data = (SegmentationItemList.from_folder(path_img,div=True) 
    .random_split_by_pct()
    .label_from_func(get_y_fn, classes=codes, div=True) # This method is primarily intended for inputs that are filenames, but could work in other settings.
    .transform(get_transforms(), tfm_y=True, size=128)
    .databunch())

I have passed div in two places, for the images and for the masks, yet this is still happening. What is the fix now?

Fixed: I tried @Mirodil’s code and it gave me TypeError: ‘bool’ object is not callable, so use that code but pass div=True, not just True, in SegmentationItemList.from_folder.

My model is training, but I can’t view more than a few examples in the example viewer.

What results do you get?
I have also obtained these empty images every now and then. I do not know what causes it or how to fix it.

I can share the notebook, but basically my training loss goes to about 0.25 while my validation loss and dice explode… What about you?

My accuracy can be all over the place. My validation loss does explode as a function of the learning rate.

When I change the size, I get ‘ValueError: Expected target size (4, 10404), got torch.Size([4, 10201])’ …

INRIA?

Hi all,

I’ve now been back through this code and, for the life of me, can’t get it to work. Specifically, I tried to modify Carvana to work with fastai v1 (v1.0.31), and I still get the error:

RuntimeError: CUDA error: device-side assert triggered

This is after redefining SegmentationLabelList:

class SegmentationLabelList(ImageItemList):
    def __init__(self, items:Iterator, classes:Collection=None, **kwargs):
        super().__init__(items, **kwargs)
        self.classes,self.loss_func,self.create_func = classes,CrossEntropyFlat(),partial(open_mask, div=True)
        self.c = len(self.classes)

    def new(self, items, classes=None, **kwargs):
        return self.__class__(items, ifnone(classes, self.classes), **kwargs)

and then calling SegmentationItemList with div=True:

src = (SegmentationItemList.from_folder(train_128_path, div=True)
       .split_by_idx(valid_idx=range(4065,5087))
       .label_from_func(get_y_fn, classes=codes, div=True))

as well as tried setting open_mask’s div to True:

src.train.y.create_func = partial(open_mask, div=True)
src.valid.y.create_func = partial(open_mask, div=True)

I can’t help but feel that I’m missing something obvious here. If anyone in the thread has it working on v1.0.31, please do let me know how you managed it! @sgugger, any hints would be most appreciated…

This is also still unsolved. Changing image sizes should not remove images from the training set. These segmentation bugs need fixing.

I also encountered this problem. Waiting for a solution.

@jyoti3, @quodatlas,

Does your code still work with the current version of fastai (1.0.32)?

data.show_batch breaks because it shows black-and-white masks.

Learner.create_unet has been replaced by unet_learner, and I encounter an error:

AttributeError: ‘SegmentationItemList’ object has no attribute 'c’

I haven’t tried Carvana with the latest version of fastai. Note that we can’t help you with just

RuntimeError: CUDA error: device-side assert triggered

This is a generic CUDA error due to a bad-index problem, and you need to try one forward pass on the CPU to get more details on it.
We also can’t help you if you don’t give us the version of fastai you’re working with, since there have been a lot of changes as we refined the data block API (which is stabilized now).
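
For example, a rough sketch of that CPU check, assuming the data and learn objects defined earlier in the thread:

# run one batch through the model on the CPU so the real indexing error
# (e.g. a mask value of 255 hitting CrossEntropyFlat) surfaces with a readable
# stack trace instead of the generic CUDA assert
x, y = next(iter(data.train_dl))
model = learn.model.cpu()
out = model(x.cpu())
loss = learn.loss_func(out, y.cpu())   # the loss call is usually where bad mask indices fail
print(out.shape, loss)
# move the model back with learn.model.cuda() when you are done debugging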

As for using a customized open function for the masks (if you want to set div=True), just change (or subclass) the open method of SegmentationLabelList.

For a hopefully helpful reference, I’ve updated my binary segmentation notebook (for mapping buildings from aerial/drone imagery) to work on fastai v1.0.33:

https://nbviewer.jupyter.org/github/daveluo/zanzibar-aerial-mapping/blob/master/znz-segment-buildingfootprint-20181205-comboloss-rn34.ipynb

For those having issues changing SegmentationLabelList to open binary masks with div=True by default, this worked for me based on @sgugger’s suggestion:

class SegLabelListCustom(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)
    
class SegItemListCustom(ImageItemList):
    _label_cls = SegLabelListCustom

src = (SegItemListCustom.from_folder(path_img)
        .split_by_idx(valid_idx)
        .label_from_func(get_y_fn, classes=codes))
...
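
To go from that src to a trainable model, the rest of the pipeline follows the same pattern as the earlier posts (sketch only; size, bs, and the dice metric are assumed to be defined as above):

data = (src.transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

learn = unet_learner(data, models.resnet34, metrics=dice)  # unet_learner replaces Learner.create_unet in recent versions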

In the notebook, I also add a custom loss function (a combo of BCE and soft dice loss; I’m not sure my dice loss function is working entirely correctly yet, so please let me know if you spot any bugs!) and make use of a slightly modified SaveModelCallback to auto-save and load the weights from the best epoch.
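
For reference, here is a rough sketch of one common way to write such a combined loss in plain PyTorch (not necessarily what the notebook does; the dice weighting and smoothing values are assumptions). It expects one-channel logits and 0/1 target masks:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BCESoftDiceLoss(nn.Module):
    "Sketch of a combined BCE + soft-dice loss; assumes one-channel logits and 0/1 masks."
    def __init__(self, dice_weight=1.0, smooth=1.0):
        super().__init__()
        self.dice_weight, self.smooth = dice_weight, smooth

    def forward(self, logits, targets):
        logits = logits.view(logits.size(0), -1)             # flatten to (n, h*w)
        targets = targets.float().view(targets.size(0), -1)  # flatten to (n, h*w)
        bce = F.binary_cross_entropy_with_logits(logits, targets)
        probs = torch.sigmoid(logits)
        intersect = (probs * targets).sum(dim=1)
        union = probs.sum(dim=1) + targets.sum(dim=1)
        dice_score = (2. * intersect + self.smooth) / (union + self.smooth)
        return bce + self.dice_weight * (1. - dice_score.mean())

With fastai you would then set learn.loss_func = BCESoftDiceLoss(), but check the notebook for the exact setup the author used.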
