Konica Minolta Pathological Image Segmentation Challenge

Just came across this challenge on topcoder: http://crowdsourcing.topcoder.com/KonicaMinoltaChallenge . It is an image segmentation problem and participation seems to be very low: 35 participants so far, with a week left before it ends. I would like to know if anyone is interested in working on this challenge, and I'm happy to share some starter code to encourage new participants.

@shinto, thanks for the tip!

I entered the competition in the last few days before the deadline, but I didn't have time to get very far and barely beat the baseline of 750k. I'm interested in knowing how some competitors made it into 800k territory (other than by trying to overfit the test set…).

My solution was a modified version of https://github.com/jocicmarko/ultrasound-nerve-segmentation . I just changed it to accept RGB channels and a higher resolution.
My CV scores were above 800k, but my leaderboard scores were around 490k. By the time I figured out that I was submitting the image masks upside down, it was too late. My final submission scored 713k with 50 epochs of training.
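For anyone else who hits the same orientation bug: it's worth sanity-checking one predicted mask against its source image before submitting. A minimal sketch of the fix, assuming the masks are NumPy arrays (the `mask` array here is just a made-up example):

```python
import numpy as np

# Hypothetical predicted mask that has come out upside down
mask = np.array([[1, 1, 0],
                 [0, 0, 0],
                 [0, 0, 1]])

# Flip vertically before writing the submission file
fixed = np.flipud(mask)
print(fixed)
```

`np.flipud` only reverses the row order; if the masks are rotated rather than flipped, `np.rot90(mask, k=...)` is the one to reach for instead.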

:slight_smile: I had the same issues: a rotation problem when submitting, and a CV score far from the leaderboard score.

My best submission, at 771k, used four 256x256 tiles to cover the 500x500 test images, with a 10-fold U-Net. I didn't manage to run the full 500x500 U-Net as it went out of memory on a p2 instance (how did you do it?). However, with the smaller 256x256 tiles, the predictions seemed particularly good at the center of each tile but really weak at the edges, and I didn't have time to investigate further or to try Jeremy's Tiramisu implementation.
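One common way around the weak tile edges is to predict on overlapping tiles and keep only the central region of each prediction, where the network has full context. A minimal NumPy sketch of the stitching logic, with a placeholder `predict` function standing in for the U-Net (this is an assumption about how one might do it, not what I actually ran):

```python
import numpy as np

def predict_tiled(image, predict, tile=256, crop=64):
    """Predict on overlapping `tile` x `tile` windows, keeping only the
    central (tile - 2*crop) region of each prediction, where quality
    is best. `predict` maps a (tile, tile) array to a (tile, tile) mask."""
    h, w = image.shape
    step = tile - 2 * crop                  # useful central region per tile
    pad_h = (-h) % step + 2 * crop          # padding for full coverage
    pad_w = (-w) % step + 2 * crop
    padded = np.pad(image, ((crop, pad_h - crop), (crop, pad_w - crop)),
                    mode='reflect')
    out = np.zeros((h + (-h) % step, w + (-w) % step), dtype=np.float32)
    for y in range(0, h, step):
        for x in range(0, w, step):
            pred = predict(padded[y:y + tile, x:x + tile])
            out[y:y + step, x:x + step] = pred[crop:-crop, crop:-crop]
    return out[:h, :w]

# Sanity check with an identity "model": the stitched output
# should reproduce the input exactly.
image = np.arange(500 * 500, dtype=np.float32).reshape(500, 500)
result = predict_tiled(image, predict=lambda t: t)
```

With `tile=256` and `crop=64`, a 500x500 image needs a 4x4 grid of overlapping tiles instead of 2x2, so prediction is slower, but no output pixel ever comes from the outer 64-pixel band of a tile.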

Since there wasn’t any private leaderboard, I guess the way to go was to try overfitting the leaderboard, wasn’t it ?

Edit: there was actually a private leaderboard, where I moved up to 12th position. Now I really regret not having had time to try Jeremy's Tiramisu…

Images were resized to 496x496. I wanted to try the 500x500 size, but the Conv2DTranspose layer gives an error, and I could not figure out the appropriate number of filters or strides (I need to watch the lectures again). With a batch size of 8 there were no memory problems on a 1080 Ti. Each epoch took around 15 seconds. No image augmentation. Here's the model:

from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, Conv2DTranspose,
                          concatenate, BatchNormalization)
from keras.optimizers import Adam
from keras import backend as K

img_rows = img_cols = 496  # inputs resized from 500x500

# Dice coefficient and loss, as in the jocicmarko repo linked above
smooth = 1.

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)

def get_unet():
    inputs = Input((img_rows, img_cols, 3))

    # Encoder: double conv + batch norm at each scale, then 2x2 max-pool
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = BatchNormalization()(conv1)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
    conv4 = BatchNormalization()(conv4)
    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
    conv4 = BatchNormalization()(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    # Bottleneck
    conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
    conv5 = BatchNormalization()(conv5)
    conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)
    conv5 = BatchNormalization()(conv5)

    # Decoder: transposed conv upsampling + skip connections from the encoder
    up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
    conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
    conv6 = BatchNormalization()(conv6)
    conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)
    conv6 = BatchNormalization()(conv6)

    up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
    conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
    conv7 = BatchNormalization()(conv7)
    conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)
    conv7 = BatchNormalization()(conv7)

    up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
    conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
    conv8 = BatchNormalization()(conv8)
    conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)
    conv8 = BatchNormalization()(conv8)

    up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
    conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
    conv9 = BatchNormalization()(conv9)
    conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
    conv9 = BatchNormalization()(conv9)

    # 1x1 conv to a single-channel sigmoid mask
    conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9)

    model = Model(inputs=[inputs], outputs=[conv10])

    model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss, metrics=[dice_coef])

    return model
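About the 500x500 error: my guess is it's a divisibility issue rather than a filter/stride problem. The network above pools 2x2 four times, so the input side must be divisible by 2^4 = 16 for the transposed convolutions to upsample back to shapes that match the skip connections in `concatenate`. 496 = 31 * 16 works; 500 / 16 = 31.25 does not (500 pools down through 250, 125, 62, 31, and 31 upsamples back to 496, not 500). A quick check:

```python
def unet_compatible(size, n_pools=4):
    """True if `size` survives n_pools 2x2 poolings and upsamplings
    without a shape mismatch, i.e. is divisible by 2**n_pools."""
    return size % (2 ** n_pools) == 0

print(unet_compatible(496))  # True:  496 = 31 * 16
print(unet_compatible(500))  # False: 500 / 16 = 31.25
```

So the usual options are resizing to the nearest multiple of 16 (as done here), or padding 500x500 up to 512x512 and cropping the prediction back.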