Creating custom ItemList for segmentation masks

Hi everyone!

I’m trying to tackle the Data Science Bowl 2018 to get some training in medical segmentation, and I’d like to do it with the fastai library. For now I’ve been able to work my way around it by creating custom DataBunches from the constructor, using custom dataloaders and datasets. Now I’d like to use the data_block API to make things cleaner, but there is a lot I don’t understand and I’m getting lost in the docs and the source code.
The competition consists of detecting nuclei in cell images. For that, we are given a training set containing the original images and, for each of them, mask images for every nucleus to be detected (one mask per nucleus). The data is organized such that the image with the id id can be found in id/images/id.png and its masks are all in the folder id/masks. We also have a CSV with a run-length encoding of the masks together with the corresponding image id (one line per mask). However, I’d rather not use the RLE file, as I compute the metric directly from the combined mask (the sum of all nuclei masks). The evaluation metric is a custom mean_iou that you can get more information on here, but as it is quite complicated, I will not detail it. The important thing to note is that finding a clear separation between nuclei matters. To begin with, my training target is the combined mask (by the way, I wonder if there is a way to train on targets of different sizes, for instance lists of nuclei centers and radii).
Now, here are my remarks/questions:

  • Images are obviously of varying sizes, so I decided to train on random crops. I therefore need to apply the exact same transformation to the mask image. How is this supposed to happen with the data_block API? I have a hard time understanding how and when transforms are applied in the pipeline. As I understand it, I can pass a list of transforms to the ItemList, so if I give it a random cropping function, how can I make it apply the same transformation to an image and its mask, while still changing for every image?
  • What does the c attribute correspond to in an ImageList?
  • I guess my label_cls here is ImageList (or a custom MaskList)?
  • To get labels, I need to call label_from_func with a custom func of mine, right (as I don’t have labels, only masks)?
  • If I want to add a test set, I have another problem, as I use overlapping crops for prediction. Do I need to create a custom learner with an override of predict to do that?

As I was writing this I answered some of my other questions myself, but in general I’m still quite lost on how to tackle this specific dataset with the data_block API (which is why this text is quite messy, sorry). If you have any insights or ideas on how to do it, I would appreciate it a lot!

Thanks!


You should start by checking this tutorial, where there is an example of loading data for a segmentation task. Since you have masks that are the sums of nuclei, I think you can use SegmentationItemList (which is just an ImageList with label_cls defaulting to SegmentMask).

To make sure the transforms are properly applied to your masks, you just need to pass tfm_y=True in the call to transform.
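For reference, a minimal sketch of what that could look like with standard fastai v1 names (path_img, get_y_fn, the size and batch size are assumptions for illustration, not from this thread):

from fastai.vision import *

# Hypothetical setup: path_img holds the images, get_y_fn maps an image to its mask
src = (SegmentationItemList.from_folder(path_img)
       .split_by_rand_pct()
       .label_from_func(get_y_fn, classes=['void', 'nucleus']))

# tfm_y=True makes the same (randomly resolved) transforms apply to the mask
data = (src.transform(get_transforms(), size=128, tfm_y=True)
        .databunch(bs=8))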


Thanks! My main concern is making sure the random transforms are the same for an image and its corresponding mask. Is that the case? And do you have any insights about how I can change the test-time behaviour of the learner?

EDIT: Yeah, I just noticed the answer to my transform question is in the tutorial, I shall read more carefully in the future. However, I’m still not sure about something: even if I use SegmentationItemList, isn’t it supposed to load only one mask per image? Do I still need a custom implementation of SegmentMask to override the open function so that it sums over all masks in the folder?

I have another question about transforms: if I want to use a random transform from pytorch, for instance torchvision.transforms.RandomCrop, do I just need to pass something like RandTransform(Transform(RandomCrop(size)))? (I know I have to convert to PIL image and back to tensors, but that is not my issue here.) Or do I actually need to get the functional version and let RandTransform compute the parameters on its own? I know that for cropping I can use the fastai version directly, but this is in case I want to do more bizarre things afterwards.

If you use torchvision transforms, they won’t apply the same thing to the target, so you should use fastai transforms (there is a random crop too). As for your other question, yes, you will need to write a custom open method to sum over the masks.
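A skeleton of what such an override might look like (a sketch only; it assumes the label path points at the id/masks folder, and the class name is hypothetical):

import PIL
import numpy as np
import torch
from fastai.vision import *

class NucleiLabelList(SegmentationLabelList):
    def open(self, fn):
        # fn is the masks folder; sum every per-nucleus mask into one binary mask
        masks = [pil2tensor(PIL.Image.open(f).convert('L'), np.float32)
                 for f in get_image_files(fn)]
        combined = (torch.cat(masks).sum(0, keepdim=True) > 0).float()
        return ImageSegment(combined)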

Thanks a lot!

Hi! :vulcan_salute:

I was going to post about the Data Science Bowl 2018 since I didn’t find many fastai enthusiasts on this Kaggle, but since this has just been posted here, I will ask here.

I am also trying to use the data_block API, but I am struggling with how to input the data. I fooled around with pandas DataFrames, grabbing the list of image paths and fusing the masks, but that’s it. The SegmentationItemList call returns a ValueError: setting an array element with a sequence.

My notebook is available here: https://colab.research.google.com/drive/14qYCofJWwkU8XOMUXLWzq7SXpHNuW8Z1

I would also appreciate it if someone has a working notebook with fastai for this Kaggle.
Disclaimer: I’m a biologist, so quite the beginner at coding, be kind :slight_smile:

Thanks!


Hi!

I managed to make it work. I am going to create a clean notebook and share it here as soon as it is finished (probably later today or tomorrow). Stay tuned!


Hi! I’m working on the exact same thing right now, and this is how I am parsing the input training images:

from pathlib import Path
from fastai.vision import *

# train and test directories (base_dir points at the competition data)
path_img_train = base_dir + 'stage1_train/' # need to split this folder into train and val sets
path_img_test = base_dir + 'stage1_test/' # images only, use to test

# When we grab images from_folder, we also grab all of the masks,
# so we want to filter out all of the mask images
def filter_only_training_images(file_path):
    return not Path(file_path).match('*/masks/*')

# Create a segmentation list
itemList = (SegmentationItemList
            .from_folder(path_img_train)
            .filter_by_func(filter_only_training_images)
            .split_by_rand_pct())

itemList.items

This seems to be working well up to this point.

Now I’m actually working out how to parse the masks as labels for these inputs. If anyone can give any tips on this, that would be great for me too! :slight_smile: There are multiple non-overlapping masks per image in this dataset, and the approach I’m thinking through at the moment is how to combine them into one single mask so that I can use one of the label functions (i.e. label_from_func) in the data block api (see the sketch below). Still haven’t figured this out yet…
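One possible way to wire that up, building on the custom open override sketched earlier in the thread (get_y_fn and the exact folder mapping are assumptions for illustration):

# Map each image path (id/images/id.png) back to its masks folder (id/masks)
get_y_fn = lambda img_path: img_path.parent.parent/'masks'

# label_from_func then hands each masks folder to the custom open method,
# which combines the per-nucleus masks into a single binary mask
labeled = itemList.label_from_func(get_y_fn, classes=['void', 'nucleus'])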

Hi!

I’m moving away from the fastai library for this particular work as I find it a bit limiting, but you can find what I basically did to make it work in this notebook. It is a bit messy and I didn’t take the time to clean the outputs, but it basically works. If you have any remarks, feel free of course!


Hi @florobax, thanks for this! This will be useful for me :slight_smile: Just trying to get to a point where I can actually do a training run.

Can I ask what part of fastai you found limiting on this particular dataset?

Thanks for sharing your notebook @florobax!

It is so interesting to see how someone else attacked the problem.
What score did you get in the end? The best private score I got was 0.00227, which seems bad.

It was a mission, but it’s my first working Kaggle submission.

The things I found challenging were combining the masks, making the DataBunch and scaling the test images.

I’ll try clean up my notebook and post it here as well.

Hi!
To answer @adeperio: the first problem is that I can’t find a good way to integrate my test pipeline with fastai. Besides, every time I need to add a custom part to it, I find myself losing 5 hours reading the doc and the source code, as some parts feel very unnatural to me (for instance, when I tried to implement additional transforms). Finally, some features are not handled by the API, or the support is well hidden, like custom batch samplers. All in all, I find it more fitting to create my own mini library that works exactly as I want it to.

As for @musedivision, I got up to 0.37503 with fastai (and 0.41547 without using the API but still using the one-cycle policy). However, I got very low results every time I tried to use normalize() on the dataset, which makes me think it should not be used here (besides, exploring the source code of the winners, I can’t find a single mention of the word “normalize”, hence I guess it is useless). Good luck with this competition, I have been working on it for a month and I am still far from a standout result^^ If you’re interested, I created a repo with my source code (it is a work in progress, so still messy and, more importantly, not functional yet), but it could give you inspiration for some steps. To be more precise, branch master does work (it is my last version with fastai integration), while branch full_refactor is not at all finished for now. The code is not commented, so don’t hesitate to ask questions if you’re curious. I also encourage you to check the githubs of top-ranking teams once you are familiar with the basics, it helps to get inspiration for where to go next. Good luck!


Hi again!

If anyone reads this, I am a bit lost but don’t want to create another topic for it. As I said, I am trying to create my own mini library for the Data Science Bowl 2018, so I can fully customize it as I please. I am of course taking a lot of inspiration from fastai. However, now that I have largely completed it, I cannot reproduce the results I was getting with fastai (it is in fact not even close). I am not sure where this comes from and am currently investigating, but even using fastai’s DynamicUnet, I can’t get it to converge nearly as well as with the fastai library (the loss would go down to around 0.05 within 10 epochs, now I can’t get it below 0.1, and accuracy would go up to 0.5 while now I have a hard time reaching 0.3). I reckon it might be a problem with my training loop or my LR scheduler, but I can’t find where. If anyone (@sgugger ? :blush:) can take a look and give me some feedback, my code is here. It could also be a problem with the way I import my data, but I doubt it, as it is very similar to what I did before. Still, if you want to check, the dataset code is here.

I’ll keep on looking for something fishy, but I’m afraid the fastai library is too well done to be equaled, and I might need to go back to it.

Anyway, thanks to those who’ll take a look!

It doesn’t seem like you’re using true weight decay (though I don’t know what optimizer you’re using); that may be an explanation.
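For context, “true” (decoupled) weight decay multiplies the weights by (1 - lr * wd) directly instead of folding the decay into the gradient the way Adam’s weight_decay argument does. A minimal sketch of one way to do it in a plain PyTorch loop (an illustration, not the actual fastai implementation):

import torch

def step_with_true_wd(opt, lr, wd):
    # Decay the weights directly (decoupled weight decay, as in AdamW),
    # then take the optimizer step on the undecayed gradients
    for group in opt.param_groups:
        for p in group['params']:
            if p.grad is not None:
                p.data.mul_(1 - lr * wd)
    opt.step()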

Check out the code that I wrote for the Data Science Bowl 2018. I achieved a 90.5 dice score!

import warnings
import PIL
import numpy as np
import torch
from fastai.vision import *

def open_mk(fn:PathOrStr, div:bool=False, convert_mode:str='L', cls:type=ImageSegment,
            after_open:Callable=None)->Image:
    "Return `Image` object combining all the nucleus masks found in folder `fn`."
    mask_files = get_image_files(fn)

    with warnings.catch_warnings():
        warnings.simplefilter("ignore", UserWarning) # EXIF warning from TiffPlugin
        masks = []
        for file in mask_files:
            x = PIL.Image.open(file).convert(convert_mode)
            x = pil2tensor(x, np.float32)
            masks.append(x)
        mask = torch.cat(masks, dim=0) # one channel per nucleus: (num_masks, H, W)

    if after_open: mask = after_open(mask)
    if div: mask.div_(255)
    num_masks, H, W = mask.shape
    # Collapse the stack of per-nucleus masks into a single binary mask
    labels = torch.zeros((H, W))
    for index in range(num_masks):
        labels[mask[index] > 0] = 1
    return cls(labels[None,:,:])

class NucleusSegmentationLabelList(SegmentationLabelList):
    def open(self, fn): return open_mk(fn)

class NucleusSegmentationItemList(ImageList):
    _label_cls = NucleusSegmentationLabelList

# Map each image file to its masks folder (id/images/id.png -> id/masks)
get_labels = lambda x: x.parent.parent/'masks'

files_list = ItemList(train_files) # train_files: list of training image paths

src = (NucleusSegmentationItemList(files_list)
       .split_by_rand_pct()
       .label_from_func(get_labels, classes=['void','nucleus']))

@sgugger I use Adam and pass weight decay to its constructor, like opt = optim.Adam(unet.parameters(), lr=cfg.LRS[0], weight_decay=cfg.WD). Is there another step I am missing to make it work? I thought opt.step() was enough to take it into account.
@ingbiodanielh That’s quite a nice score! How much do you get on the private LB? I can’t get past .42 (and as of right now I can’t even get my new program to run correctly)

EDIT: I checked the doc and source code for the Learner class and indeed I am not using true wd. I will try to implement it.

Well, I am currently running a training with true weight decay (you can check my implementation in the same file as before, in the new step method), but it doesn’t look much better for now. The loss goes from .7 to .65 within 1 epoch, while it used to reach .12 after the first epoch. I use exactly the same parameters and number of epochs, and the same DynamicUnet with a resnet34 backbone. The only differences are my training loop and the way I initialize the network, as I can’t use a learner here:

import torch
from torch import nn, optim
from fastai.vision.models import DynamicUnet
from fastai.vision.learner import create_body
from torchvision.models import resnet34

# Same architecture as unet_learner builds: a DynamicUnet over a resnet34 body
# (note that unet_learner defaults to pretrained=True, unlike here)
unet = DynamicUnet(create_body(resnet34, pretrained=False), 1)
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
opt = optim.Adam(unet.parameters(), lr=cfg.LRS[0], weight_decay=cfg.WD, betas=(0.9, 0.99))
# Net, OneCycleScheduler, cfg and getNextFilePath come from my own mini library
net = Net(unet, opt, nn.BCEWithLogitsLoss(), [mean_iou], cfg.MODELS_PATH)
scheduler = OneCycleScheduler(cfg.LRS, len(tl))
save_name = f'{cfg.MODEL}_fastai_{cfg.EPOCHS}_{cfg.LRS[0]}_{cfg.WD}'
save_name += f'_{getNextFilePath(cfg.MODELS_PATH, save_name)}'

net.fit(dls, cfg.EPOCHS, save_name, device, scheduler=scheduler)

I’ll keep on trying things and exploring fastai doc I guess.

The difference is actually present from the start. With the fastai API, the loss is at .35 when training begins, while it is over .7 with my method. Even when I do the exact same initialization as in unet_learner, it doesn’t work better…

EDIT: It is also not a problem with the inputs, as I tried to run it with the dataloaders that come from my former databunch. The issue is probably somewhere between data loading and the start of training.

Hi @florobax, sorry for the late reply, I have been super busy on my end. Thanks for the context! That’s good to know.