U-Net binary segmentation

You can see my code above, which helps solve the problem. I custom-defined SegmentationItemList and SegmentationLabelList to handle the issue. We need to pass div=True to the open_mask function.

@jeremy In the above training cycle I got an abnormally high validation error. Is it something similar to what you mentioned in the talk today? Reducing the training error largely avoided the issue for me.

Thanks, I am using it. I realize I had assumed that a hook (an assert or some such) would be added in version 1.0.24, rather than it still requiring redefining the function.

Hi, I had a similar issue with the new changes; after a little digging in the code, I went this way:

src = (SegmentationItemList.from_folder(train_128_path, True)
    .split_by_idx(valid_idx=range(4065,5087))
    .label_from_func(get_y_fn, classes=codes))
# override create_func so the target masks are opened with div=True
src.train.y.create_func = partial(open_mask, div=True)
src.valid.y.create_func = partial(open_mask, div=True)
data = (src.transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

Hope this will help.


Hello, can you help? Could you be more specific, please?

For the new API, I am getting the same error with:

data = (SegmentationItemList.from_folder(path_img,div=True) 
    .random_split_by_pct()
    .label_from_func(get_y_fn, classes=codes, div=True) # This method is primarily intended for inputs that are filenames, but could work in other settings.
    .transform(get_transforms(), tfm_y=True, size=128)
    .databunch())

I have passed div in two places, for the images and for the masks, yet this error still happens. What is the fix now?

Fixed: I tried @Mirodil’s code and it gave me TypeError: ‘bool’ object is not callable, so use that code but pass div=True (not just a bare True) in SegmentationItemList.from_folder.
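
In other words, the working version of that pipeline (the same code as above, with div passed by keyword) looks like:

src = (SegmentationItemList.from_folder(train_128_path, div=True)  # keyword argument, not a bare True
    .split_by_idx(valid_idx=range(4065,5087))
    .label_from_func(get_y_fn, classes=codes))
# still override create_func so the target masks are opened with div=True
src.train.y.create_func = partial(open_mask, div=True)
src.valid.y.create_func = partial(open_mask, div=True)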

My model is training, but I can’t view more than a few examples in the example viewer.

What results do you get?
I have also obtained these empty images every now and then; I don’t know what causes it or how to fix it.

I can share the notebook, but basically my loss goes to about 0.25 while my validation loss and dice explode… What about you?

My accuracy can be all over the place. My validation loss does explode as a function of the learning rate.

When I change the size, I get ‘ValueError: Expected target size (4, 10404), got torch.Size([4, 10201])’ …
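
(For what it’s worth, the two sizes in that error are flattened spatial dimensions one pixel apart per side: 10404 = 102 × 102 expected versus 10201 = 101 × 101 received, which suggests the image and the mask end up at slightly different sizes after resizing.)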

INRIA?

Hi all,

I’ve now been back through this code, and for the life of me I can’t get it to work. Specifically, I tried to modify Carvana to work with fastai v1 (v1.0.31), and I still get the error:

RuntimeError: CUDA error: device-side assert triggered

This is after redefining SegmentationLabelList:

class SegmentationLabelList(ImageItemList):
    def __init__(self, items:Iterator, classes:Collection=None, **kwargs):
        super().__init__(items, **kwargs)
        self.classes,self.loss_func,self.create_func = classes,CrossEntropyFlat(),partial(open_mask, div=True)
        self.c = len(self.classes)

    def new(self, items, classes=None, **kwargs):
        return self.__class__(items, ifnone(classes, self.classes), **kwargs)

and then calling SegmentationItemList with div=True:

src = (SegmentationItemList.from_folder(train_128_path, div=True)
    .split_by_idx(valid_idx=range(4065,5087))
    .label_from_func(get_y_fn, classes=codes, div=True))

as well as trying to set open_mask’s div to True:

src.train.y.create_func = partial(open_mask, div=True)
src.valid.y.create_func = partial(open_mask, div=True)

Can’t help but feel that I’m missing something obvious here - if anyone in the thread has it working on v1.0.31 please do let me know how you managed it! @sgugger, any hints would be most appreciated…

This is also still unsolved. Changing image sizes should not remove images from the training set. These segmentation bugs need fixing.

I also encountered this problem. Waiting for a solution.

@jyoti3, @quodatlas,

Does your code still work with the current version of fastai (1.0.32)?

data.show_batch breaks because it shows black and white masks

Learner.create_unet has been replaced by unet_learner, and I encounter an error:

AttributeError: ‘SegmentationItemList’ object has no attribute 'c’

I haven’t tried Carvana with the latest version of fastai. Note that we can’t help you with just

RuntimeError: CUDA error: device-side assert triggered

This is a generic CUDA error caused by a bad index; you need to run one forward pass on the CPU to get more details on it.
We also can’t help you if you don’t give us the version of fastai you’re working with, since there have been a lot of changes as we refined the data block API (which is stabilized now).
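
As a rough illustration of that CPU check (a minimal sketch assuming a Learner named learn built on a DataBunch named data, as in the code above):

learn.model.cpu()                      # move the model off the GPU
x, y = next(iter(data.train_dl))       # grab one batch from the training DataLoader
out = learn.model(x.cpu())             # forward pass on the CPU
loss = learn.loss_func(out, y.cpu())   # the bad index now fails with a readable message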

As for using a customized open function for the masks (if you want to set div=True), just change (or subclass) the open method of SegmentationLabelList.

For a hopefully helpful reference, I’ve updated my binary segmentation notebook (for mapping buildings from aerial/drone imagery) to work on fastai v1.0.33:

https://nbviewer.jupyter.org/github/daveluo/zanzibar-aerial-mapping/blob/master/znz-segment-buildingfootprint-20181205-comboloss-rn34.ipynb

For those having issues changing SegmentationLabelList to open binary masks with div=True by default, this worked for me based on @sgugger’s suggestion:

class SegLabelListCustom(SegmentationLabelList):
    # open masks with div=True so 0/255 pixel values become 0/1 class indices
    def open(self, fn): return open_mask(fn, div=True)

class SegItemListCustom(ImageItemList):
    _label_cls = SegLabelListCustom  # labels built from this item list use the custom class

src = (SegItemListCustom.from_folder(path_img)
        .split_by_idx(valid_idx)
        .label_from_func(get_y_fn, classes=codes))
...

In the notebook, I also add a custom loss function (a combo of BCE and soft dice loss… not sure that my dice loss function is working entirely correctly yet, so please let me know if you spot any bugs!) and make use of a slightly modified SaveModelCallback to auto-save and load the weights from the best-resulting epoch.
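
For readers who want the general shape of such a combo loss, here is one common minimal formulation in plain PyTorch (a generic sketch, not necessarily the notebook’s exact implementation; bce_weight is an illustrative parameter):

import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-7):
    # soft dice on sigmoid probabilities; approaches 0 as the prediction matches the target
    probs = torch.sigmoid(logits)
    targets = targets.float()
    intersection = (probs * targets).sum()
    return 1 - (2 * intersection + eps) / (probs.sum() + targets.sum() + eps)

def combo_loss(logits, targets, bce_weight=0.5):
    # weighted sum of BCE (computed on raw logits) and soft dice
    bce = F.binary_cross_entropy_with_logits(logits, targets.float())
    return bce_weight * bce + (1 - bce_weight) * soft_dice_loss(logits, targets)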


I have the same problem; I ran it with CPU-based PyTorch and got this:

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes’ failed. at /opt/conda/conda-bld/pytorch-nightly-cpu_1544170178111/work/aten/src/THNN/generic/ClassNLLCriterion.c:93

Now what?
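
That assertion says some target values fall outside [0, n_classes). One quick check (a sketch; mask_fn is a placeholder for the path to one of your mask files) is to inspect the raw values in a mask:

mask = open_mask(mask_fn)   # mask_fn: path to one of your mask files
print(mask.data.unique())   # values must lie in [0, n_classes); seeing 255 means you need div=True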

Sorry if this is a noob question, but how do you implement this change? Do you go into the source code and update the library, or just redefine the class in your own notebook?

You can just add the code block and run the cell in your own notebook. See the notebook link in my post for an example of this.

I am trying to implement the U-Net paper, but when concatenating the features from the contracting path with the upsampled features, I noticed that in the paper’s example the features from the encoder are 64x64 while the upsampled features are 56x56.
My question is how to concatenate them: do you pad the upsampled features to 64x64, or do you crop the features from the encoder to 56x56?
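
For reference, the original paper crops the encoder features (the “copy and crop” arrows in the architecture figure); padding the convolutions so the sizes match is a common alternative in later implementations. A minimal sketch of the crop-then-concatenate step in PyTorch (the shapes are the paper’s example; the function name is illustrative):

import torch

def center_crop(enc_feat, target_hw):
    # center-crop encoder features to the decoder's spatial size
    _, _, h, w = enc_feat.shape
    th, tw = target_hw
    dh, dw = (h - th) // 2, (w - tw) // 2
    return enc_feat[:, :, dh:dh+th, dw:dw+tw]

enc = torch.randn(1, 64, 64, 64)   # 64x64 features from the contracting path
up  = torch.randn(1, 64, 56, 56)   # 56x56 upsampled features
merged = torch.cat([center_crop(enc, up.shape[-2:]), up], dim=1)  # -> (1, 128, 56, 56)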