Kaggle Questions


(Jakub Arnold) #63

This is probably a silly question, but what does the Kaggle submission score mean? Is it accuracy percentage on the test set? Or is it a percentile?

I’ve tried two submissions so far, one trained on a very small training set (100 images) just to see if it works, and another trained on 2000 images, and they got scores of 31.18 and 32.86 respectively (but both scored around 99% accuracy on the validation set).


#64

Each Kaggle competition has a page explaining how it is scored. A common metric is log loss.
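
For Dogs vs. Cats Redux it is log loss on the probability you assign to “dog” (lower is better, 0 would be perfect). As a rough illustration of the formula, not Kaggle’s exact implementation, it can be computed like this:

    import numpy as np

    def log_loss(y_true, y_pred, eps=1e-15):
        """Binary log loss: lower is better, 0 is a perfect score."""
        y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
        return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

    # Toy example: 4 images, label 1 = dog
    y_true = np.array([1, 0, 1, 0])
    y_pred = np.array([0.9, 0.2, 0.8, 0.1])
    print(log_loss(y_true, y_pred))  # ~0.16

For reference, predicting 0.5 for every image scores about 0.69 (= -ln 0.5).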


(Florian) #65

Good morning,
I am completely new to this course. I did my setup of a p2 instance and made my way through lesson 1. It was great fun and I have taken a lot from it so far. Thanks for the great course. As proposed, I made an account at Kaggle, chose the Dogs vs. Cats Redux competition, confirmed the rules and uploaded a few different submission files (with only minor differences between them). The best score I got was 0.09185, which would put me somewhere in the 300s.
Looking under my submissions I find all of the uploads, however I cannot find myself on the leaderboard. Am I doing something wrong here? A search inside Kaggle did not enlighten me so far.
Looking at the score my best submission received, it is far, far away from the top 50. Up to now I have run 4 epochs, which indeed gave some improvement, but nothing that would encourage me to think that a few more epochs would get me into the top 50.
I use the correct labels (the ones for dogs). I have also played with the log loss values, which shows some effect, but again nothing that makes me think it could change the game.
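To be concrete, this is roughly what I mean by playing with the log loss values (just a sketch; the clipping bounds here are made up):

    import numpy as np

    # probability of "dog" for each test image (dummy values standing in for my model output)
    preds = np.array([0.999, 0.001, 0.87, 0.42])

    # Clipping keeps a few very confident mistakes from blowing up the log loss.
    clipped = np.clip(preds, 0.05, 0.95)  # the bounds are arbitrary, something to tune
    print(clipped)  # [0.95 0.05 0.87 0.42]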
So I am a bit stuck. Either I got something very wrong, or the last 6 months have changed the quality level of this competition in a way that further, but at the moment unknown, steps are necessary to make significant improvements.
Thanks in advance for every hint.


(Yu Shen) #66

Yes, the pre-condition for ‘kg download’ to work is having accepted the competition rules by going to the competition page and clicking the button to join (“late participation”).

It would be nice to add this reminder to the wiki. It seems that, as a newcomer, I may not have the privilege to sign in and provide the update myself.


(Pavel Surmenok) #67

it is far, far away from the top 50

There is no requirement to get into the top 50. The task for lesson 1 was to “try to get in the top 50%”, which is a much easier task than getting into the top 50.

or the last 6 months have changed the quality level of this competition in a way that further, but at the moment unknown, steps are necessary to make significant improvements

The public leaderboard has changed since the first few lessons were recorded. In the lesson 2 video you can see that there were 149 positions on the public leaderboard at that time. Now there are 1314 positions, and the scores at the top of the leaderboard have improved. To be in the top 50% you have to beat position 657, which has a score of 0.12204. So it seems you made it into the top 50% with your 0.09185 score!


(vee) #68

Is there a way to know our percentile, or do we just calculate it based on the score?


(Pavel Surmenok) #69

I was calculating it based on the score. I think they don’t show new users on the leaderboard because the competition has completed.
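
For example, if you download the public leaderboard as a CSV, the percentile is just your rank divided by the number of teams. A minimal sketch (the filename and the 'Score' column name are placeholders; use whatever the downloaded file actually contains):

    import pandas as pd

    my_score = 0.09185
    lb = pd.read_csv('publicleaderboard.csv')  # placeholder filename for the downloaded leaderboard
    scores = lb['Score'].values                # placeholder column name; lower log loss is better

    rank = int((scores < my_score).sum()) + 1  # teams strictly ahead of you, plus you
    percentile = 100.0 * rank / len(scores)
    print(rank, percentile)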


(Max Howarth) #70

I’ve had some interesting results from submitting my scores to the Kaggle competition.

On my first attempt, I realized that I had only used a sample set of data to train the model using only 1 epoch. (Am I using this term correctly? Or should it be: “…sample set of data with 1 training epoch.”)

My first score was 0.12370.

I decided to run the prediction again; this time, however, I switched to the full training set and ran 3 epochs.

When I submitted my results, the second score was 0.18837!

The accuracy of each model, according to the output in my Jupyter notebook, was ~87% and ~98% respectively.

Given that the reported accuracy is higher for the second model, why would the Kaggle score be worse?

The only reasons I can think of are:

  1. A peculiarity of the scoring system.
  2. The first model got “lucky”?

Any ideas?
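
To make option 1 concrete, here is a toy calculation (made-up numbers, not my real predictions) showing how a handful of confident wrong answers can dominate log loss even when accuracy is high:

    import numpy as np

    def log_loss(y_true, y_pred, eps=1e-15):
        y_pred = np.clip(y_pred, eps, 1 - eps)
        return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

    n = 1000
    y_true = np.ones(n)              # pretend every image is a dog
    y_pred = np.full(n, 0.98)        # confident and right on all of them
    print(log_loss(y_true, y_pred))  # ~0.02

    y_pred[:10] = 0.01               # 10 confident mistakes -> still 99% accuracy
    print(log_loss(y_true, y_pred))  # ~0.07 -- those 10 images dominate the score

If that is what is happening, a more confident second model could easily score worse despite being more accurate.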


(Soen Surya Soenaryo) #71

It’s quite the opposite for me.

My first approach was 3 epochs with a 0.01 learning rate. It gave me 0.11392.
After that, I changed my learning rate. The result on my VM actually became worse than the first one, but Kaggle gave me 0.09591.
I then went back to the default learning rate but used more epochs. The result, 0.10277, was better than the first one but worse than the second one.

So I think changing the learning rate may have a bigger impact in this case. I will make another attempt with a smaller learning rate and more epochs later.
I assume it may give a better result (in the Kaggle evaluation, at least).
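
For what it’s worth, here is roughly what I mean by changing the learning rate. This is just a Keras 1-style sketch with a dummy model standing in for the real one from the notebook, and the 1e-4 value is simply the next thing I plan to try:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam

    # Dummy stand-in for the actual model from the lesson notebook.
    model = Sequential([Dense(1, activation='sigmoid', input_dim=10)])

    # Keras' Adam default is lr=0.001; compile with a smaller learning rate instead.
    model.compile(optimizer=Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])

    x = np.random.rand(100, 10)
    y = np.random.randint(0, 2, 100)
    model.fit(x, y, nb_epoch=10, verbose=0)  # more epochs; nb_epoch is the Keras 1 argument name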