Understanding Kaggle ranking and improving it

I finally was able to work through lesson 1 and submit my predictions to Kaggle for the Dogs vs. Cats Redux competition. It says my score is 0.10258. I assume that is an error rate of 10%, not a success rate of 10%? What are folks getting on their first runs? Any suggestions on where I should look to improve that score?

I’m going to try a higher number of fit epochs and see how that works.

I’ve also seen this cell in the notebook:

# Not sure if we set this for all fits
vgg.model.optimizer.lr = 0.01

I’m not sure what we are not sure about, nor where that line comes from. Can somebody point me to a resource about the optimizer’s learning rate? Thanks.

The score on the Dogs vs. Cats competition is a log loss - you can find more information about it, including how it’s calculated, on Kaggle’s wiki.
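
For reference, here’s a minimal numpy sketch of the binary log loss (a sketch, not the exact Kaggle code; the convention that 1 = dog with p the predicted probability of dog matches the Redux submission format, and the clipping epsilon is my assumption to keep log() finite):

import numpy as np

def log_loss(y, p, eps=1e-15):
    # y: true labels (1 = dog, 0 = cat); p: predicted probabilities of dog
    p = np.clip(p, eps, 1 - eps)  # keep probabilities away from 0 and 1
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

A confident, correct prediction contributes almost nothing, while a confident mistake blows the average up quickly.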

The lr is the learning rate - basically, once you know in which direction you need to tweak the parameters of your model to get a better result, this gives the size of the step to take. The error surface that stochastic gradient descent needs to walk down is very complex and non-linear, and the gradient tells us something that is only valid in a relatively small region, so taking too large a step can prevent us from converging to a good solution.
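
To make the step-size intuition concrete, here’s a tiny sketch (my own toy example, not from the course notebooks) of gradient descent on the one-dimensional loss f(x) = x**2, whose minimum is at x = 0:

def descend(lr, x=5.0, steps=20):
    # gradient descent on f(x) = x**2; the gradient at x is 2 * x
    for _ in range(steps):
        x -= lr * 2 * x  # step of size lr against the gradient
    return x

print(descend(0.01))  # too small: still far from the minimum (~3.34)
print(descend(0.1))   # reasonable: close to the minimum (~0.06)
print(descend(1.1))   # too large: every step overshoots and x diverges (~192)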

I’m on lesson two and I don’t think this has been discussed yet, but my guess is that tweaking optimization parameters (or switching the optimization method altogether) will likely be covered later when we start looking at the finer details.
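
One thing worth flagging in the meantime: assigning a plain Python float to optimizer.lr, as in the notebook cell quoted above, may not update the underlying backend variable once the model is compiled. A hedged sketch of the more explicit route, assuming the Keras 1.x setup and the course’s vgg wrapper object from lesson 1:

from keras import backend as K

# optimizer.lr is a backend variable, so change it with set_value
# rather than plain attribute assignment
K.set_value(vgg.model.optimizer.lr, 0.01)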

0.10 is a very nice result, so good job :slight_smile:

The score of 0.10258 doesn’t mean that your error rate is 10%. Your accuracy (1 - error rate) is likely close to your validation accuracy, e.g. 97-99% (a 1-3% error rate).

The score is log loss, which takes into account how confident you are in your predictions. Log loss penalizes you more if you are overconfident in a prediction. You can test this by clipping all your prediction probabilities to 0.999 for dogs (0.001 for cats) and comparing that to clipping at 0.97 (0.03). The latter, less confident prediction leads to a better score, because when you do make a mistake, you are penalized less.
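
A quick numeric check of that idea (a sketch; the per-image penalty is just minus the log of the probability assigned to the true class):

import numpy as np

def penalty(p_true_class):
    # per-image log loss contribution
    return -np.log(p_true_class)

# The image is actually a dog, but the model said "cat":
print(penalty(0.001))  # clipped to 0.999/0.001 and wrong: ~6.91
print(penalty(0.03))   # clipped to 0.97/0.03 and wrong: ~3.51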

Thanks everybody for the tips and thoughts. I watched lesson 2 and now I understand it much better. It is just another loss/cost function, like least squares, but it uses the log of the predicted probability to measure how well our predictions fit the actual data.

I was thinking in terms of the labels, i.e., [0, 1], and whether an answer was right or wrong. I wasn’t thinking in terms of the actual probabilities we submit and how right or wrong an answer was.

For the learning rate, I see now that it controls the size of the steps that gradient descent takes. I think I need to poke around in the Keras documentation more.

At the moment I’m in the top 31%! I’m going to give it a couple more tries and see if I can improve that.

Great work ranking in Dogs vs. Cats, and keep improving! I was able to get into the top 10% with the lesson 3 methods.

Thanks for the encouragement. I haven’t finished lesson 3 yet; with only lesson 2 I’ve been able to get into the top 20%. My latest tries at changing the learning rate have not been so successful.
