Kaggle ‘Intel & MobileODT Cervical Cancer Screening’ competition

Maybe something like this? https://aws.amazon.com/about-aws/whats-new/2013/01/08/use-amazon-cloudwatch-to-detect-and-shut-down-unused-amazon-ec2-instances/

Thanks for your detailed comments, appreciate it.

Cool, did you use the additional data set, or just the train.zip data?

And for the conv blocks, what was your thought process in deciding the number of filters, max-pooling strides, etc.?

I just used train.zip.

I didn’t give much thought to filters, strides, etc. I literally used the first example here.

Here’s the model in its entirety:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense

# 150x150 RGB images (the size mentioned below), channels-last
input_shape = (150, 150, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))

# 3 classes: cervix Type 1 / 2 / 3
model.add(Dense(3))
model.add(Activation('softmax'))
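
The compile step isn’t shown in the post; for a 3-class softmax output it would presumably look something like this (a sketch, not necessarily the exact settings used):

# Assumed settings -- the optimizer is a guess; the loss follows from the
# 3-way softmax output.
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])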

What kind of data augmentation do you use?

from keras.preprocessing.image import ImageDataGenerator

# Augment only the training data; the validation data is just rescaled.
train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

validation_datagen = ImageDataGenerator(rescale=1. / 255)

I used 150x150 as the image dimensions.
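
For reference, a typical way to wire these generators up at that size (the directory names, batch size, and epoch count here are placeholders, not taken from the post):

# Hypothetical layout: one subdirectory per cervix type under each folder.
train_generator = train_datagen.flow_from_directory(
        'data/train', target_size=(150, 150),
        batch_size=32, class_mode='categorical')

validation_generator = validation_datagen.flow_from_directory(
        'data/valid', target_size=(150, 150),
        batch_size=32, class_mode='categorical')

model.fit_generator(
        train_generator,
        steps_per_epoch=len(train_generator),
        epochs=30,  # arbitrary; not stated in the post
        validation_data=validation_generator,
        validation_steps=len(validation_generator))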

Interesting. I got to 0.98854 on the leaderboard with SqueezeNet, a 245 image size, and no augmentation.

Weird, I read through that same article a while back and couldn’t get under a 1.2 leaderboard score. I’ll have to give it another go.

Currently, I have a model that just seems stuck around 0.58 accuracy and 0.89 loss (on training/validation data).

…but not to get too distracted, I’ve moved on to RNNs. Trying to finish up Part 1 this month.

‘Weird’ is a good way to put it 🙂

I have experimented with a number of things, and when something improves the final val loss I am happy to see it, but I am no smarter than before as to why it helped.

A big challenge for me with this dataset is the inability to look at an image and recognize what type it is (unlike dogs vs. cats, for example). So I can’t look at the examples the model got wrong and figure out how to make it better. All my experimenting has really been more ‘brute-force search’ than anything thoughtful.

That is quite difficult, yes. Have you seen the MobileODT webinar? It helped me somewhat: https://youtu.be/N8OGrykSgy0

I had not seen this … thanks for the link!

Hi all, I was able to get around a 0.83 LB score. Below is my approach:

Precompute VGG16bn features for everything (initial_train, initial_valid, additional_train, additional_valid, test) at image size 448x448. Be sure to remove useless and corrupted images (check the forums for them).

Use these features as input to a simple head: MaxPool-(Dense(4096)-Dropout(0.6)-BatchNorm)x2-Dense(3, softmax). Train this for 3 epochs on the entire data.

Build 5 such models and average their predictions.
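
A rough sketch of that pipeline (plain VGG16 stands in for the course’s VGG16bn here, and the optimizer, batch size, and file handling are my own assumptions rather than the exact code):

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Sequential
from keras.layers import MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization

# Conv-only base at 448x448; features are computed once and cached on disk.
base = VGG16(include_top=False, weights='imagenet', input_shape=(448, 448, 3))
# train_feats = base.predict(preprocess_input(x_train), batch_size=16)
# np.save('train_feats.npy', train_feats)

def make_head(feature_shape):
    # MaxPool-(Dense(4096)-Dropout(0.6)-BatchNorm)x2-Dense(3, softmax),
    # with a Flatten added so the dense layers get a 1-D input.
    m = Sequential()
    m.add(MaxPooling2D(input_shape=feature_shape))
    m.add(Flatten())
    for _ in range(2):
        m.add(Dense(4096, activation='relu'))
        m.add(Dropout(0.6))
        m.add(BatchNormalization())
    m.add(Dense(3, activation='softmax'))
    m.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
    return m

# Train 5 such heads for 3 epochs each and average their test predictions:
# heads = [make_head(train_feats.shape[1:]) for _ in range(5)]
# for h in heads:
#     h.fit(train_feats, y_train, epochs=3,
#           validation_data=(valid_feats, y_valid))
# preds = np.mean([h.predict(test_feats) for h in heads], axis=0)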

I tried data augmentation, but it doesn’t seem to converge as fast as the non-augmented approach. (Maybe I need to experiment with it more.)

Pretty good after just 3 epochs. Did the score not improve when running more epochs?

It was overfitting @rashudo

Hi Ravi,
How is this different from just setting the convolutional layers of the complete VGG16bn model to be non-trainable? It seems to me that you are just doing it in two stages.

Did you train any of the conv layers at all, or just use the pre-trained weights?

No retraining of conv layers. I just computed the conv features and stored them on disk, @Christina.

Okay, thanks. I think the main advantage of doing it this way is speed: if you don’t precompute (and instead run the whole VGG16bn every time), you recompute the conv features each time you fit the model.

Hi @rteja1113, thanks for sharing! I just did one submission and ranked 600+ out of the 661 teams 😳. There is a lot more for me to try. The ensembling method you used seems very promising.

I just realized that when I sum the predicted probabilities row-wise, the sum is not always exactly 1.0000; sometimes it is 0.999XXX and sometimes 1.000XXX. I was wondering why this happens. Do you have a guess as to how much this would affect the LB score?

Hi @shushi2000, the reason you are seeing that is that floating-point operations are not exact in computers; for example, 0.1 + 0.2 may give you 0.300001. Everyone is limited by this problem, so I don’t think it affects the LB much.
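
A quick illustration in Python (the renormalization step is optional and my own suggestion, not something mentioned above):

import numpy as np

print(0.1 + 0.2)  # 0.30000000000000004 -- standard float rounding

# If you want the submission rows to sum to 1 anyway, renormalize:
preds = np.array([[0.2, 0.3, 0.5000001]])
preds = preds / preds.sum(axis=1, keepdims=True)
print(preds.sum(axis=1))  # ~1 to within float precision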

I also trained the same network on ResNet features. I averaged all the models and am now able to get around 0.76471.
EDIT: Forgot to mention that I did a clipping of 0.15.
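
If ‘clipping of 0.15’ means clamping the predicted probabilities away from 0 and 1 before submitting (my reading of the post, not confirmed), it would look roughly like this, with preds being the averaged prediction matrix:

import numpy as np

# Clamp to [0.15, 0.85] to limit the log-loss penalty on confident mistakes,
# then renormalize each row to sum to 1.
clipped = np.clip(preds, 0.15, 0.85)
clipped = clipped / clipped.sum(axis=1, keepdims=True)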

Thank You

Fascinating! Again, thank you very much! I am wondering how different the average of the 5 sets of predictions from VGG is from the average from ResNet. Just curious.
Can we make this generalization: the more sets of predictions we use, the better the LB result we will get?