State Farm Kaggle comp

Ask questions or add your thoughts about the State Farm competition here!

Hi Jeremy

I am working on the State Farm competition but getting low accuracy on both train and sample/train.
I have the data copied as suggested

statefarm/train/c*
statefarm/valid/c*
statefarm/test/unknown/*
statefarm/results/
statefarm/sample/
statefarm/sample/train/c*
statefarm/sample/valid/c*
statefarm/sample/results/

Here is the code:

batch_size = 64
from vgg16 import Vgg16
vgg = Vgg16()
path = DATA_HOME_DIR + '/'

# Grab a few images at a time for training and validation.
# NB: they must be in subdirectories named based on their category.
batches = vgg.get_batches(path + 'train', batch_size=batch_size)
val_batches = vgg.get_batches(path + 'valid', batch_size=batch_size*2)
vgg.finetune(batches)

vgg.fit(batches, val_batches, nb_epoch=1)
vgg.model.save_weights(path + 'results/ft1.h5')

vgg.fit(batches, val_batches, nb_epoch=1)
vgg.model.save_weights(path + 'results/ft2.h5')

Here are the results:

Epoch 1/1
200/200 [==============================] - 7s - loss: 14.7481 - acc: 0.0850 - val_loss: 14.1839 - val_acc: 0.1200
Epoch 1/1
200/200 [==============================] - 8s - loss: 14.0433 - acc: 0.1200 - val_loss: 14.4263 - val_acc: 0.0800

There’s no reason that finetuning just the last layer should create a good State Farm model. The ImageNet model was created for a very different purpose (recognizing objects in images, vs. identifying whether a driver is distracted). So I think to get a good result in this competition you’ll need to take a somewhat different approach! Maybe you can experiment with a few different things on the sample and see if you can find something that looks promising… :slight_smile:
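For instance, one thing to experiment with is making the dense layers trainable too, not just the new softmax that finetune() adds. A rough sketch, assuming the course’s vgg16.py wrapper and Keras 1 (the layer index is found dynamically, and the learning rate is just a starting point to try):

from keras.layers.core import Dense

layers = vgg.model.layers
# find the first Dense layer and make everything from there onwards trainable
first_dense = [i for i, l in enumerate(layers) if type(l) is Dense][0]
for layer in layers[first_dense:]:
    layer.trainable = True
# recompile so the new trainable flags take effect
vgg.compile(lr=0.001)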

Let us know how you go!

That sounds like a very interesting challenge… I will give it a shot… but I am not hopeful :slight_smile:

In the lesson I gave some tips on how to start. I’ll be posting the video soon - so hopefully that’ll be useful. If you get stuck, just ask - but I believe in you! :slight_smile:

I have a model that looks reasonable and I’m trying to create the CSV file for Kaggle, but I think what’s being returned from predict_generator with the test set as its argument is the wrong shape:

test_batches, preds = sf.test(path + 'test')
Found 79726 images belonging to 1 classes.
preds
array([[  0.0000e+00,   1.0000e+00],
       [  0.0000e+00,   1.0000e+00],
       [  0.0000e+00,   1.0000e+00],
       ..., 
       [  6.1120e-36,   1.0000e+00],
       [  3.5104e-30,   1.0000e+00],
       [  0.0000e+00,   1.0000e+00]], dtype=float32)
preds.shape
(79726, 2)
sf.model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
lambda_1 (Lambda)                (None, 3, 224, 224)   0           lambda_input_1[0][0]             
____________________________________________________________________________________________________
zeropadding2d_1 (ZeroPadding2D)  (None, 3, 226, 226)   0           lambda_1[0][0]                   
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D)  (None, 64, 224, 224)  0           zeropadding2d_1[0][0]            
____________________________________________________________________________________________________
zeropadding2d_2 (ZeroPadding2D)  (None, 64, 226, 226)  0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 64, 224, 224)  0           zeropadding2d_2[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 64, 112, 112)  0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
zeropadding2d_3 (ZeroPadding2D)  (None, 64, 114, 114)  0           maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 128, 112, 112) 0           zeropadding2d_3[0][0]            
____________________________________________________________________________________________________
zeropadding2d_4 (ZeroPadding2D)  (None, 128, 114, 114) 0           convolution2d_3[0][0]            
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 128, 112, 112) 0           zeropadding2d_4[0][0]            
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)    (None, 128, 56, 56)   0           convolution2d_4[0][0]            
____________________________________________________________________________________________________
zeropadding2d_5 (ZeroPadding2D)  (None, 128, 58, 58)   0           maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 256, 56, 56)   0           zeropadding2d_5[0][0]            
____________________________________________________________________________________________________
zeropadding2d_6 (ZeroPadding2D)  (None, 256, 58, 58)   0           convolution2d_5[0][0]            
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D)  (None, 256, 56, 56)   0           zeropadding2d_6[0][0]            
____________________________________________________________________________________________________
zeropadding2d_7 (ZeroPadding2D)  (None, 256, 58, 58)   0           convolution2d_6[0][0]            
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D)  (None, 256, 56, 56)   0           zeropadding2d_7[0][0]            
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D)    (None, 256, 28, 28)   0           convolution2d_7[0][0]            
____________________________________________________________________________________________________
zeropadding2d_8 (ZeroPadding2D)  (None, 256, 30, 30)   0           maxpooling2d_3[0][0]             
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D)  (None, 512, 28, 28)   0           zeropadding2d_8[0][0]            
____________________________________________________________________________________________________
zeropadding2d_9 (ZeroPadding2D)  (None, 512, 30, 30)   0           convolution2d_8[0][0]            
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D)  (None, 512, 28, 28)   0           zeropadding2d_9[0][0]            
____________________________________________________________________________________________________
zeropadding2d_10 (ZeroPadding2D) (None, 512, 30, 30)   0           convolution2d_9[0][0]            
____________________________________________________________________________________________________
convolution2d_10 (Convolution2D) (None, 512, 28, 28)   0           zeropadding2d_10[0][0]           
____________________________________________________________________________________________________
maxpooling2d_4 (MaxPooling2D)    (None, 512, 14, 14)   0           convolution2d_10[0][0]           
____________________________________________________________________________________________________
zeropadding2d_11 (ZeroPadding2D) (None, 512, 16, 16)   0           maxpooling2d_4[0][0]             
____________________________________________________________________________________________________
convolution2d_11 (Convolution2D) (None, 512, 14, 14)   0           zeropadding2d_11[0][0]           
____________________________________________________________________________________________________
zeropadding2d_12 (ZeroPadding2D) (None, 512, 16, 16)   0           convolution2d_11[0][0]           
____________________________________________________________________________________________________
convolution2d_12 (Convolution2D) (None, 512, 14, 14)   0           zeropadding2d_12[0][0]           
____________________________________________________________________________________________________
zeropadding2d_13 (ZeroPadding2D) (None, 512, 16, 16)   0           convolution2d_12[0][0]           
____________________________________________________________________________________________________
convolution2d_13 (Convolution2D) (None, 512, 14, 14)   0           zeropadding2d_13[0][0]           
____________________________________________________________________________________________________
maxpooling2d_5 (MaxPooling2D)    (None, 512, 7, 7)     0           convolution2d_13[0][0]           
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 25088)         0           maxpooling2d_5[0][0]             
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 4096)          0           flatten_1[0][0]                  
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 4096)          0           dense_1[0][0]                    
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 4096)          0           dropout_1[0][0]                  
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 4096)          0           dense_2[0][0]                    
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 10)            8194        dropout_2[0][0]                  
====================================================================================================
Total params: 8194
____________________________________________________________________________________________________
filenames = batches.filenames
preds[:5]
array([[ 0.,  1.],
       [ 0.,  1.],
       [ 0.,  1.],
       [ 0.,  1.],
       [ 0.,  1.]], dtype=float32)

Shouldn’t the shape of preds be (79726, 10)?

Hi Chris,
I replicated this process using a fine-tuned Vgg16 model and my prediction dimensions were (79726, 10). If your final layer has an output of 10, then that is what you should be getting back. Try running it again; if you get the same problem, copy and paste your notebook into https://gist.github.com/ and save it with an .ipynb extension so we can see exactly what you did.

Thank you! I think I’ve figured out what I did wrong:

  1. I called load_weights on a VGG model that had been fine-tuned with a Dense(2) final layer.
  2. I fine-tuned again with a Dense(10) layer.
  3. Then I called test() without a call to fit().
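
For reference, the sequence I should have used looks roughly like this (a sketch against the course’s Vgg16 wrapper; ft10.h5 is a hypothetical weights file saved from the Dense(10) model, not one of my actual files):

from vgg16 import Vgg16

vgg = Vgg16()
batches = vgg.get_batches(path + 'train', batch_size=batch_size)
val_batches = vgg.get_batches(path + 'valid', batch_size=batch_size*2)
vgg.finetune(batches)                             # final layer is now Dense(10)
vgg.model.load_weights(path + 'results/ft10.h5')  # weights saved from a Dense(10) model
vgg.fit(batches, val_batches, nb_epoch=1)
test_batches, preds = vgg.test(path + 'test', batch_size=batch_size)
print(preds.shape)  # should now be (79726, 10)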

That will do it! Good catch!

Hi Garima,

I’m just cracking the book on the State Farm comp. How did you create the directory structure, if you don’t mind me asking?

I wrote a couple of little Python functions to make the directory structure, like this:

import os
import numpy as np
from glob import glob

def mkdirs(path):
    # create the ten class subdirectories c0..c9
    for i in range(10):
        os.mkdir('{}/c{}'.format(path, i))

def mv_valid(path):
    # move 200 random images per class from train to valid
    for i in range(10):
        d = 'c{}'.format(i)
        g = glob('{}/{}/*.jpg'.format(path, d))
        shuf = np.random.permutation(g)
        for j in range(200):
            os.rename(shuf[j], shuf[j].replace('train', 'valid'))

That validation set won’t work correctly - it’ll give very (very!) optimistic results. The competition data page says: “The train and test data are split on the drivers, such that one driver can only appear on either train or test set.” That means you need to do the same thing for your validation set creation!

@jeremy good to know, thank you!

@mattobrien415 I haven’t tried it yet, but I believe the functions above would create the directory structure for you.

My model gets very poor results no matter what I do. The training and validation accuracy hover around 10% and never improve. This is even more obvious in the confusion matrix: the model guessed a single category for every image.

I have moved 3 different drivers into my validation set, which I think is what Jeremy has been hinting at. I have set the layers from the first dense layer onwards to trainable, since this dataset isn’t about predicting ImageNet categories.

I assume there is something fundamentally flawed with my model if it only ever predicts one category. I have considered using techniques like data augmentation, batch normalization, and dropout, but my understanding is that these only help once you have a relatively good model. Is there something simple I’m missing?
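(For reference, here’s roughly how I’m producing the confusion matrix - a sketch assuming scikit-learn and the course’s test() helper; the exact names may differ in your setup:)

from sklearn.metrics import confusion_matrix

# predict on the validation set and compare against the true labels
val_batches, probs = vgg.test(path + 'valid', batch_size=64)
print(confusion_matrix(val_batches.classes, probs.argmax(axis=1)))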

@brianorwhatever the challenge of getting a model to train at all on this dataset is part of what makes it such a good learning experience. So the process you’re going through will hopefully be very rewarding :slight_smile:

It sounds to me like you’re on the right track. I’d suggest trying a variety of learning rates - perhaps yours is too high. I’d also suggest using batch normalization if you can - it makes initial learning much easier.
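
For example, something like this is worth trying (a sketch using the course wrapper; the exact rate is something to experiment with):

# recompile with a much smaller learning rate, then train a few more epochs
vgg.compile(lr=1e-4)
vgg.fit(batches, val_batches, nb_epoch=3)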

Sounds good - I will keep plugging away. Thanks!

Brian, I’m with you. My model is perfect… at classifying everything into the first class! I’m using a sample training size of 2000 and a validation set of 500 (drivers not in the training set), with a learning rate of 0.01. I popped the final VGG layer and added my new Dense layer. I even tried going back a few layers, but still everything lands in the first category. Even though this is a small training set, I figured I could at least get to 11%; then I wouldn’t feel like this was a bug on my part.

As suggested by Jeremy, I’ve found a lot more success with lower learning rates.

Am I getting any warmer? This creates a validation set of 827 images.

import os
import numpy as np
import pandas as pd

def mv_valid():
    # read the driver/image list into a pandas DataFrame
    dil = pd.read_csv(path + 'driver_imgs_list.csv')
    # group the frame by the subject (driver) in each image
    grouped = dil.groupby('subject')
    # pick a subject at random (np.random.randint's upper bound is exclusive)
    subjects = list(grouped.groups.keys())
    subject = subjects[np.random.randint(len(subjects))]
    # get the group associated with the subject
    group = grouped.get_group(subject)
    # loop over the group and move the images into the validation directory
    for (subject, cls, img) in group.values:
        source = '{}train/{}/{}'.format(path, cls, img)
        target = source.replace('train', 'valid')
        print('mv {} {}'.format(source, target))
        os.rename(source, target)
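
One caveat for anyone copying this: each call moves a single random driver, and a repeated call can pick the same driver again after its files have already been moved. To hold out several drivers at once (e.g. the 3 mentioned earlier in the thread), a sketch like this may be safer:

# sample the held-out drivers once, then move all their images in one pass
dil = pd.read_csv(path + 'driver_imgs_list.csv')
held_out = np.random.permutation(dil.subject.unique())[:3]
for subject, cls, img in dil[dil.subject.isin(held_out)].values:
    source = '{}train/{}/{}'.format(path, cls, img)
    os.rename(source, source.replace('train', 'valid'))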