Kaggle 'The Nature Conservancy Fisheries Monitoring' competition

Kaggle just announced that the leaderboard for stage 1 will be wiped, and only the stage 2 submissions will count.

Yes, they are. I am trying to find some sort of common sense in their decision, but it has been a vain struggle so far. I have put some effort into this competition and, thanks to the MOOC, I have managed to climb into the top 10% of the leaderboard, which could bring the symbolic satisfaction of a Kaggle medal. Now they are basically taking this away by redesigning the rules in the last week…

Fortunately, they decided to award medals based on the number of teams in stage 1. Wendy added details in the thread.

The preliminary results are in, and the sample submission benchmark ranked 42nd out of roughly 2.3k entries!

There are actually only ~400 entries for stage 2, but that is still pretty shocking. I guess most of us needed to do better validation from boat to boat. I wonder if using information from the sample submission would have helped too (e.g. rescaling each prediction so that on average they match the sample submission).
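For what it's worth, here is a rough sketch of that rescaling idea, assuming a standard submission CSV with one probability column per class (the file names and column layout are placeholders, not the exact competition format):

```python
# Hypothetical sketch: scale each class column so its mean matches the sample
# submission's mean for that class, then re-normalise each row to sum to 1.
import pandas as pd

my_sub = pd.read_csv('my_submission.csv')          # columns: image, ALB, BET, ..., YFT
sample = pd.read_csv('sample_submission.csv')      # same layout (assumed)

class_cols = [c for c in my_sub.columns if c != 'image']

for c in class_cols:
    # pull our average prediction for this class towards the sample's average
    my_sub[c] *= sample[c].mean() / my_sub[c].mean()

# rows no longer sum to 1 after scaling, so renormalise
my_sub[class_cols] = my_sub[class_cols].div(my_sub[class_cols].sum(axis=1), axis=0)
my_sub.to_csv('rescaled_submission.csv', index=False)
```

Whether this actually helps would depend on how far your class priors drift from the test set's, so it is only a guess.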


Good point, surprisingly few people submitted entries for stage 2.

I only submitted a simple pre-trained VGG model with bounding box regression on the annotations. Nothing more sophisticated because I had no clue how to do good validation to gauge my results. I think a lot of people felt like they were stumbling around in the dark. This has been a really good lesson for me though, and I'm going to pay a lot of attention to getting it right in the future!
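For anyone curious, a minimal Keras sketch of that kind of model (a frozen pre-trained VGG16 with a softmax head over the 8 fish classes plus a 4-number bounding-box regression head) might look like the following. This is my guess at the setup, not the exact model described above:

```python
# Minimal sketch, not the poster's exact model: VGG16 base, two output heads.
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense, Dropout
from keras.models import Model

base = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                      # keep the convolutional base frozen

x = Flatten()(base.output)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
class_out = Dense(8, activation='softmax', name='class')(x)   # fish class
bbox_out = Dense(4, activation='linear', name='bbox')(x)      # x, y, width, height

model = Model(inputs=base.input, outputs=[class_out, bbox_out])
model.compile(optimizer='adam',
              loss={'class': 'categorical_crossentropy', 'bbox': 'mse'},
              loss_weights={'class': 1.0, 'bbox': 0.001})
```

The loss weights are illustrative; the bbox term usually needs to be scaled down so it does not swamp the classification loss.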

This thread has a nice discussion of some validation strategies.


I think if I had more time to devote to this competition I would probably have tried a "leave one boat out" cross validation strategy. I also didn't even attempt fish localization.
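For reference, scikit-learn's LeaveOneGroupOut makes that kind of split easy to wire up, assuming you have assigned a boat ID to each image by hand or by clustering (the competition did not provide boat labels, so the IDs below are made up):

```python
# Sketch of a "leave one boat out" split with scikit-learn.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

filenames = np.array(['img_0001.jpg', 'img_0002.jpg', 'img_0003.jpg', 'img_0004.jpg'])
labels    = np.array(['ALB', 'YFT', 'ALB', 'BET'])
boat_ids  = np.array([0, 0, 1, 1])          # hypothetical boat assignments

logo = LeaveOneGroupOut()
for train_idx, val_idx in logo.split(filenames, labels, groups=boat_ids):
    print('train on boats', set(boat_ids[train_idx]),
          '-> validate on boat', set(boat_ids[val_idx]))
```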

Normally I would say I was happy with my results (top third of the stage 2 submissions), especially given the level of effort I was able to put into this, but both of the sample submissions beat me so… :confused:


Well, it's not really surprising. Once one has reached the last stage of the competition, one has learned as much as one is going to learn, and the main reason to make another submission would be Kaggle points or medals. I would not have submitted, for instance, if I were not trying to get into the top 10%, because I had to rent another AWS machine and improve my code to handle the 12k-image test set without running out of memory. I think this is the primary reason why only ~400 people submitted.
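In case it helps anyone else with the memory issue: one way to handle the ~12k stage 2 images is to stream them from disk in batches with Keras generators instead of loading everything at once. A rough sketch, where the directory layout, batch size and the stand-in ImageNet model are all placeholders:

```python
# Predict batch by batch rather than holding all test images in memory.
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing.image import ImageDataGenerator

model = VGG16(weights='imagenet')   # stand-in; substitute your trained fisheries model

gen = ImageDataGenerator(preprocessing_function=preprocess_input)
test_batches = gen.flow_from_directory(
    'test_stg2/',              # expects the images inside one subfolder, e.g. test_stg2/unknown/
    target_size=(224, 224),
    class_mode=None,           # no labels for the test set
    shuffle=False,             # keep file order so predictions line up with filenames
    batch_size=32)

preds = model.predict_generator(test_batches, steps=len(test_batches))
filenames = test_batches.filenames
```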

My place is 151/390, a poor result if you ask me. This is my second competition on Kaggle (the first was the Dogs vs. Cats fun competition) and the most difficult one, but it also gave me the chance to try new things: DetectNet, YOLO v2, dlib's MMOD detector (the deep learning version, which failed miserably for fish detection), pseudo labeling, getting more familiar with Keras, learning more ensemble methods from the posts of other Kagglers (I did not apply them in this competition, but will try them in the next one), and so on.
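Since pseudo labeling came up, here is the basic recipe as I understand it, sketched as a small helper. The threshold and batch size are illustrative, not what was actually used in the competition:

```python
# Rough sketch of pseudo labeling: predict on the unlabelled test images,
# keep only the confident predictions, and mix them back into the training set.
import numpy as np

def pseudo_label(model, x_train, y_train, x_test, threshold=0.9):
    """Append confidently-predicted test images (with soft labels) to the training set."""
    test_preds = model.predict(x_test, batch_size=32)     # (n_test, n_classes) softmax
    confident = test_preds.max(axis=1) > threshold        # keep only confident rows
    x_combined = np.concatenate([x_train, x_test[confident]])
    y_combined = np.concatenate([y_train, test_preds[confident]])
    return x_combined, y_combined
```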

It is hard to create a robust local validation set for this competition, because:

1. Small data: fewer than 4000 images, yet 8 classes to distinguish.
2. Low diversity: many images look very similar.
3. Ambiguous classes: even humans find it hard to differentiate ALB, BET and YFT.
4. Imbalanced data (a stratified split, sketched just below this list, at least keeps the class proportions).
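For point 4, a stratified split is a small mitigation: it keeps the class proportions intact even in a tiny, imbalanced dataset, although it does nothing about the boat leakage discussed earlier in the thread. A quick sketch with made-up labels:

```python
# Stratified train/validation split that preserves class proportions.
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# made-up label distribution, just to show the mechanics
labels = np.array(['ALB'] * 20 + ['BET'] * 3 + ['YFT'] * 6 + ['NoF'] * 8)
indices = np.arange(len(labels))

splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(indices, labels))

# the validation set keeps roughly the same class proportions as the full data
print(dict(zip(*np.unique(labels[val_idx], return_counts=True))))
```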

If the purpose of this competition is to probe the limitations of deep learning, this is a nice data set. If their ultimate goal is to create a robust system to help them detect and classify fish, I think this data set is far from ideal.

Tham, if you think fisheries is tough, just wait until you try the cervical cancer competition! :smirk:

It has an even smaller dataset than fisheries, some images are terrible (especially in the additional data they provided), there are duplicates or near-duplicates, poor resolution and color, and lots of "noise" in the pictures (like medical equipment, areas outside the cervix, etc.). The region of interest does not have well-defined edges and shapes like a fish. And in many of the pictures it is difficult, sometimes even impossible, for me to tell the cervix type just by looking at them.
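On the duplicates: one quick way to flag them is perceptual hashing, e.g. with the third-party imagehash library (pip install imagehash); small Hamming distances between hashes suggest near-duplicate images. A rough sketch, with a placeholder directory path:

```python
# Flag possible near-duplicate images via perceptual hashing.
import glob
from PIL import Image
import imagehash

hashes = {}
for path in glob.glob('train/*/*.jpg'):        # placeholder path to the image folders
    hashes[path] = imagehash.phash(Image.open(path))

paths = sorted(hashes)
for i, a in enumerate(paths):
    for b in paths[i + 1:]:
        if hashes[a] - hashes[b] <= 4:         # Hamming distance on the 64-bit hash
            print('possible near-duplicate:', a, b)
```

The pairwise loop is quadratic, which is fine for a few thousand images but would need bucketing by hash for anything much larger.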

I bet in this contest the leaderboard will completely switch when the final test dataset comes out! But even if I don't do well, I am learning a TON… knowledge that will hopefully be useful in the future. Especially about overfitting and underfitting! :wink:

I'm also doing the cervical cancer competition and yes it is very hard! I'm now trying to predict bounding boxes for the cervix area inspired by this post: https://flyyufelix.github.io/2017/04/16/kaggle-nature-conservancy.html
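Once a bounding-box model produces (x, y, w, h) predictions, the crop-then-classify step is fairly mechanical. A small sketch of the cropping part; the file name, box values and margin are placeholders, not taken from that post:

```python
# Crop an image to a predicted box (with a small margin) before classification.
from PIL import Image

def crop_to_box(path, box, size=(224, 224), margin=0.1):
    """Crop an image to a predicted (x, y, w, h) box, padded by a safety margin."""
    img = Image.open(path)
    x, y, w, h = box
    pad_w, pad_h = w * margin, h * margin
    crop = img.crop((max(0, x - pad_w),
                     max(0, y - pad_h),
                     min(img.width, x + w + pad_w),
                     min(img.height, y + h + pad_h)))
    return crop.resize(size)

# example usage with a placeholder path and box:
# patch = crop_to_box('train/Type_1/0001.jpg', box=(350, 120, 400, 380))
```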

Not really. My place is 151 and I got a bronze medal; if it were based on the number of teams in stage 1, I should be in the top 7%.

I agree with you :slight_smile:. This kind of competition can show us where the limitations of machine learning are today, and we can use them as a guide to figure out what kind of results we can provide.

I don't see anything wrong with that.

Out of the 2000+ people who submitted, that should be a gold medal (if they are actually going by stage 1 submissions).

Disappointed with my bronze too. :'(

Using the table here: gold would be awarded to the top 10 + 0.002 * 2293, which is 14 when rounded down to the nearest integer. Similarly, silver would be awarded to the top 114, and bronze to the top 229. This is what you see on the leaderboard in the colour of the team names.
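For anyone who wants to check the arithmetic, the cut-offs quoted above come straight from those formulas (the 1000+ team tier, values rounded down):

```python
# Medal cut-offs for a competition with 2293 teams.
teams = 2293
gold   = int(10 + 0.002 * teams)   # 14
silver = int(0.05 * teams)         # 114
bronze = int(0.10 * teams)         # 229
print(gold, silver, bronze)        # 14 114 229
```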

Ah, my mistake. I did not realize the 10/20/40% categories only apply to competitions with fewer than 100 teams. Guess I just missed silver.
