Kaggle competition hosted by The Society for Imaging Informatics in Medicine (SIIM)

"The Society for Imaging Informatics in Medicine (SIIM) is the leading healthcare organization for those interested in the current and future use of informatics in medical imaging. Their mission is to advance medical imaging informatics across the enterprise through education, research, and innovation in a multi-disciplinary community. Today, they need your help.

In this competition, you’ll develop a model to classify (and if present, segment) pneumothorax from a set of chest radiographic images. If successful, you could aid in the early recognition of pneumothoraces and save lives."


We can use this thread to share resources and links that can help us learn more about such problems.

My money is on @alexandrecc; he is gonna win if he decides to compete in this competition.


He posted about it on Twitter, so I bet he will :smiley:

I have shared a fastai starter code :slight_smile:
https://www.kaggle.com/mnpinto/pneumothorax-fastai-starter-u-net-128x128


Yes, I will participate because I really think Kaggle is still one of the best ways to learn and improve. First submission done tonight. The competition metric is somewhat surprising.

I hope to see many of you in the competition!
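For context on why the metric surprises people: as I understand it (an assumption on my part; check the competition's evaluation page for the exact rules), it is a Dice coefficient with the convention that an empty predicted mask on an image with no pneumothorax scores a perfect 1.0, so correctly predicting "nothing" on healthy images matters a lot. A minimal numpy sketch of that variant:

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks.

    Convention assumed here: if both the prediction and the ground
    truth are empty, the score is a perfect 1.0.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    if not pred.any() and not target.any():
        return 1.0  # empty prediction on an empty mask counts as perfect
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())
```

So a model that segments well but hallucinates small masks on healthy images can score worse than one that confidently predicts empty masks; details like RLE encoding and per-image averaging are on the Kaggle evaluation page.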


Thanks Miguel.

I tried transfer learning: training my learner on 64x64 images and then fine-tuning on new 512x512 data gave me a slightly worse result than submitting the 64x64 model directly. Any idea why?

Hi Miguel,

I’m also in this competition. I created my data slightly differently than you:


from fastai.vision import *  # fastai v1; provides array, get_transforms, imagenet_stats, etc.

# path and path_masks are assumed to already point at the image and mask folders
get_y_fn = lambda x: path_masks/f'{x.name}'          # mask file shares the image's filename
codes = array(['Nothing', 'Pneumo'], dtype='<U17')   # background / pneumothorax classes

data = (SegmentationItemList.from_folder(path=path/'train')
        .split_by_rand_pct(0.2)                      # random 20% validation split
        .label_from_func(get_y_fn, classes=codes)
        .add_test((path/'test').ls(), label=None)
        .transform(get_transforms(), size=128, tfm_y=True)  # apply the same tfms to the masks
        .databunch(path=Path('.'), bs=32)
        .normalize(imagenet_stats))

But the result is the same: two classes are created, meaning my U-Net model will output 2 maps and be trained with cross-entropy. Do you think this can hurt the model? In my opinion, we should instead have a single output map (the predicted mask) and train with BCE, but I haven't found a way to do this.

Hi Nathan, you can add data.c = 1 after creating the databunch. Then in the learner set BCE as the loss function.
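For what it's worth, the two formulations are closely related: a softmax over two logits reduces to a sigmoid of their difference, so the 2-map cross-entropy head and the 1-map BCE head model the same probability, the 2-map version just with a redundant parameterization. A quick numpy check of that identity (the logit values below are arbitrary examples):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

# per-pixel logits for a two-map (Nothing, Pneumo) head; arbitrary example values
z = np.array([[0.3, 1.2], [-0.5, -2.0], [2.0, 2.0]])

p_two_map = softmax(z)[:, 1]            # P(pneumo) from the 2-map softmax head
p_one_map = sigmoid(z[:, 1] - z[:, 0])  # P(pneumo) from a single logit (the difference)

assert np.allclose(p_two_map, p_one_map)
```

So switching to `data.c = 1` with BCE mostly simplifies the head rather than changing what can be learned, though the single-logit version is the more natural fit for binary segmentation.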

Thanks for this starter,
I will participate too.

I am also competing. Is anyone looking at the Google image ones as well?

It’s too bad they all seem to end at the same time.

The blindness detection one also seems interesting and useful.