"The Society for Imaging Informatics in Medicine (SIIM) is the leading healthcare organization for those interested in the current and future use of informatics in medical imaging. Their mission is to advance medical imaging informatics across the enterprise through education, research, and innovation in a multi-disciplinary community. Today, they need your help.
In this competition, you’ll develop a model to classify (and if present, segment) pneumothorax from a set of chest radiographic images. If successful, you could aid in the early recognition of pneumothoraces and save lives."
Yes, I will participate, because I really think Kaggle is still one of the best ways to learn and improve. First submission done tonight. The competition metric is somewhat surprising.
I tried transfer learning: training my learner on 64x64-pixel images and then fine-tuning it on 512x512 images gave me a slightly worse result than submitting directly with the 64x64 model. How can that be?
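For progressive resizing like this to work at all, the network has to accept both input sizes with the same weights. A UNet is fully convolutional, so it does; a minimal PyTorch sketch (toy model, my own, not the actual learner) illustrates why the 64x64 weights can be reused at 512x512:

```python
import torch
import torch.nn as nn

# Toy fully-convolutional "segmenter": no fixed-size Linear layer,
# so the SAME weights accept any spatial resolution. This is what
# makes train-at-64, fine-tune-at-512 progressive resizing possible.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1),  # one output map per pixel
)

small = torch.randn(1, 1, 64, 64)
large = torch.randn(1, 1, 512, 512)

# Output spatial size follows the input in both cases.
assert model(small).shape == (1, 1, 64, 64)
assert model(large).shape == (1, 1, 512, 512)
```

One possible explanation for the worse score: masks downsampled to 64x64 lose fine boundary detail, and fine-tuning at 512x512 may simply need more epochs or a lower learning rate before it beats the small model.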
But the result is the same: two classes are created, meaning my UNet model outputs 2 maps and is trained with CrossEntropy. Do you think this can hurt the model? In my opinion, we should instead have a single output map, the predicted mask, trained with BCE, but I haven't found a way to do this.
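For what it's worth, with exactly two classes the two setups are mathematically equivalent: softmax cross-entropy over two logit maps equals sigmoid BCE applied to the difference of the logits, so the 2-map formulation should not by itself hurt the model. A quick pure-Python check of that equivalence (function names are mine, per-pixel scalar case):

```python
import math

def bce_with_logit(z, y):
    """Binary cross-entropy on a single logit z with label y in {0, 1}."""
    p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def softmax_ce(z0, z1, y):
    """Cross-entropy over two logits (z0, z1) with label y in {0, 1}."""
    m = max(z0, z1)  # log-sum-exp trick for numerical stability
    log_sum = m + math.log(math.exp(z0 - m) + math.exp(z1 - m))
    return log_sum - (z1 if y == 1 else z0)

# softmax over (z0, z1) equals sigmoid of (z1 - z0), so the losses match.
for z0, z1, y in [(0.3, 1.2, 1), (-0.5, 0.7, 0), (2.0, -1.0, 1)]:
    assert abs(bce_with_logit(z1 - z0, y) - softmax_ce(z0, z1, y)) < 1e-9
```

So the only real costs of the 2-map version are a few redundant parameters in the final layer; in PyTorch a single-map variant is usually just a final conv with one output channel plus `BCEWithLogitsLoss`.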