State Farm Kaggle comp

Great work! Looks like you are putting on your white lab coat and experimenting like a prof!
I noticed that you started with Adam as the optimizer and later switched to SGD for the same model … interesting approach … might work I guess. Also the validation data uses gen_t … this might not be necessary, as the test images may not have skewness properties etc., so it might not be a true reflection of the test folder. So you might actually get better validation accuracy if you don't use the image generator.
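For instance, something like this (just a sketch; the augmentation parameters and paths are placeholders, not the ones from the notebook):

from keras.preprocessing import image

# Augment only the training batches; validation batches come from a plain
# generator so they stay a faithful stand-in for the test folder.
gen_t = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                                 height_shift_range=0.1, shear_range=0.1)
trn_batches = gen_t.flow_from_directory(path+'train', target_size=(224,224),
                                        batch_size=64)
val_batches = image.ImageDataGenerator().flow_from_directory(
    path+'valid', target_size=(224,224), batch_size=64, shuffle=False)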

hey ved… I didn't get a chance to change to SGD, since my validation accuracy started going up… but that's what I was planning on doing next… :slight_smile:

@jeremy did you mean to change Dense 200 to Dense 100, or add another Dense 100?

Does augmentation for validation make sense as well?

I saved the weights from all my testing yesterday.

I am hoping to reload those today so I don't spend hours getting back to 94%.
If I add a new dense layer, do I need to discard all the older weights and learning and start again?
If I change Dense 200 to Dense 100, do I need to discard all the older weights and learning and start again?

Thanks
garima

I meant to change it - since you started seeing your validation accuracy go down, which means you are overfitting too much.

No it doesn’t make sense to augment validation data, since then it wouldn’t be an accurate validation.

When changing your layers you’ll need to start again - sorry!

1 Like

With some more work, my model is a lot better now.
95% on the training set and 59% on validation is the best I've gotten so far.

Epoch 9/10
20570/20570 [==============================] - 340s - loss: 0.2268 - acc: 0.9405 - val_loss: 1.2925 - val_acc: 0.5930
Epoch 10/10
20570/20570 [==============================] - 339s - loss: 0.1867 - acc: 0.9525 - val_loss: 1.4244 - val_acc: 0.5662

My Kaggle score (1.07940) is not ideal but I think if I can train this model some more it will get better.

:slight_smile:
I think I am ready to move on from state farm.

One more question for you @jeremy: if I save my weights now, come back later, load them, and train some more WITHOUT changing my model architecture, I should be able to continue where I left off?

Thanks
garima

You’re into the top 50% of the leaderboard, so that seems like a reasonable time to move on :slight_smile: . The currently running fisheries competition has some similar issues, so you may be able to leverage your experience with statefarm there…

Yup, you can just save_weights() now, and load_weights() any time later, as long as your architecture is the same.
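For example (the filename here is made up):

# Save now; later, rebuild the identical architecture and reload.
model.save_weights(path+'results/statefarm1.h5')
# ... after recreating the same layers in a fresh session:
model.load_weights(path+'results/statefarm1.h5')
# training then picks up from the saved state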

Congrats!

Thank you :slight_smile: couldn’t have done it without your guidance !

1 Like

I am running the statefarm notebook; my instance is a p2.xlarge (60 GB RAM) and I run out of memory when it concatenates below. I am using load_array, which loads 30 GB; at that point only 20 GB of RAM remain free, so the concatenation crashes. You mention that we should be able to use regular batches instead of load_array, but I'm not sure how, especially if we need to concatenate the data augmentation features.


da_conv_feat = load_array(path+'results/da_conv_feat2.dat')

Let's include the real training data as well in its non-augmented form.

da_conv_feat = np.concatenate([da_conv_feat, conv_feat])

MemoryError                               Traceback (most recent call last)
----> 1 da_conv_feat = np.concatenate([da_conv_feat, conv_feat])

MemoryError:

1 Like

For now, don't concatenate the augmented data with the original data - just use batches. You can use bcolz.open() to map the array on disk directly, without loading it into memory, or else simply use batches without precomputing (much easier!)
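For instance, a sketch of the bcolz route (the path is the one from your notebook):

import bcolz

# bcolz.open() memory-maps the carray stored on disk, so the ~30 GB of
# augmented features never have to fit in RAM all at once.
da_conv_feat = bcolz.open(path+'results/da_conv_feat2.dat')
# Slice it like a numpy array; chunks are read from disk on demand.
first_chunk = da_conv_feat[:64]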

Tonight I’ll show how to combine batches together :slight_smile:

2 Likes

You can also save the entire model (weights included):

https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
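For example (a sketch; the filename is made up):

from keras.models import load_model

# save() stores the architecture, weights, and optimizer state in one file,
# so you can resume training without rebuilding the model by hand.
model.save(path+'results/statefarm_full.h5')
model = load_model(path+'results/statefarm_full.h5')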

2 Likes

thanks,
without concatenation, the score is 0.622, which would rank in the top 50%

2 Likes

Major understatement! It would rank in the top 25%!!!

I am finding very different validation accuracy results between 1) training a model on precomputed conv features and 2) joining the conv_model and dense models and then training the combined model. I am pretty sure I am not doing something right here … hopefully a fresh pair of eyes will point me in the right direction: https://anaconda.org/vedaustin/state-farm-aws-ver-3-1-gist/notebook

Since you’re using your training batches to precalculate your conv features, you need to add shuffle=False to that constructor :slight_smile:
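That is, something like this (a sketch; gen, path, and batch_size are the usual notebook variables):

# With shuffle=False the generator yields images in a fixed directory order,
# so features precomputed from these batches line up with the unshuffled labels.
batches = gen.flow_from_directory(path+'train', target_size=(224,224),
                                  batch_size=batch_size, shuffle=False)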

haha … of course. damn it! :slight_smile:

Hello everyone,

After getting a validation accuracy (validation set had 2000 images, only 4 misclassifications) of 99.85% on cats and dogs redux, I was pumped to get cracking on StateFarm. I tried training my own model from scratch using @jeremy’s notebook (available here) and then wanted to use the pre-computed VGG weights for convolution layers.

My bare-bones model from scratch gives OK-ish results (it overfits badly), but I'm not too concerned about that at the moment because I'd rather use the pre-computed weights. Here's where the problem begins…

When I use the pre-computed VGG weights (convolution), my accuracy stays pretty low and never goes above 14%. After reading this thread, I realized that my directory organization structure may be wrong; so I checked it, it was ok. I also noticed that a lot of people played with dropout, learning rates etc. I did exactly the same. I tried a bunch of Dropouts and a bunch of learning rates, both, high and low; I went as low as 1e-7 but it didn’t seem to help.

I feel a little stumped as I can’t seem to figure out what’s going wrong with my model.
My notebook may be found here.
My notebook for directory organization may be found here.
Would anyone happen to know what I’m doing wrong?

TIA.

Hi,
I am having the same problem. I get good results with my own CNN, but an accuracy of 0.1 with the pre-computed VGG model that @jeremy uses in the statefarm notebook. Let me know if you figure out what the problem is, and I will do the same.

1 Like

@prateek2686 When you fit your conv features, conv_feat was created from shuffled batches, but trn_labels was not shuffled, so they don’t match. You need to not shuffle the batches used to create conv_feat.

fc_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4, 
             validation_data=(conv_val_feat, val_labels))
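For the precompute step itself, a sketch (assuming conv_model is the convolutional model and gen is a plain ImageDataGenerator):

from keras.utils.np_utils import to_categorical

# Precompute features from *unshuffled* batches, then build the labels from
# the same iterator, so rows match one-to-one.
batches = gen.flow_from_directory(path+'train', target_size=(224,224),
                                  batch_size=batch_size, shuffle=False)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
trn_labels = to_categorical(batches.classes)  # same fixed order as conv_feat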
5 Likes

@layla.tadjpour see my answer above

Are you using a random set of images for validation? 10% accuracy is what I remember always seeing when not creating a proper validation set. Make sure that the drivers in your validation set are different from the ones in your training set. Hope that helps!
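For example, a rough sketch of a driver-based split using Kaggle's driver_imgs_list.csv (the held-out driver IDs are arbitrary):

import os, shutil
import pandas as pd

# Move every image belonging to a held-out driver from train/ to valid/,
# so no driver appears in both sets.
df = pd.read_csv(path+'driver_imgs_list.csv')
val_drivers = {'p012', 'p021', 'p039'}  # arbitrary choice of drivers to hold out
for _, row in df.iterrows():
    if row.subject in val_drivers:
        src = os.path.join(path, 'train', row.classname, row.img)
        dst = os.path.join(path, 'valid', row.classname, row.img)
        if not os.path.exists(os.path.dirname(dst)):
            os.makedirs(os.path.dirname(dst))
        shutil.move(src, dst)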