State Farm Kaggle comp

@jeremy in the statefarm.ipynb you say:

I’m shocked by how good these results are! We’re regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition.

I don’t understand how you arrived at the 75–80% accuracy figure from the output shown above that quote. Could anyone explain?

Hey @adhamh,

I think he’s referring to the “val_acc” column in the 15-epoch run, where it peaks at around 74–78%.

E.

How do we know that an image is of a particular driver? Is it encoded in the filename? I can’t work out how to derive it.

Hey all,

I’m having issues submitting to kaggle.

All of my submissions are getting rejected, with the error message “Evaluation Exception: Submission must have 79726 rows”

When I run the following command on my file:

cat state-farm-submission.csv | wc

I do indeed get:

79726 79726 4548287

Help? What am I missing?

The code I’m using to generate the file is the following:

I also get this memory error when predicting the conv-layer outputs for the test set. I am using Python 3 and Keras 2. Did you ever find out what causes it?

It seems to me that the outputs are only 70K * 10. The weights in the conv layers should be exactly the same size per batch for the test data as for the train/valid data, and there should never be more than one batch in memory at any one time. I have reduced my batch size from 64 to 8 just to make sure, and I still get a memory error.

My guess is that there is some sort of memory leak inside predict_generator, where it uses up memory for each batch. If so, the answer is to run the prediction and save after each batch as you have done; or maybe run it in 4 chunks and save after each.
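
Something like this rough sketch is what I mean by saving as you go (conv_model and test_batches are placeholder names; test_batches is assumed to come from flow_from_directory with shuffle=False and class_mode=None, so next() yields only the image arrays):

import numpy as np

# Predict the test conv features a batch at a time and flush them to disk in
# chunks, so the full feature array never has to sit in memory at once.
n_batches = len(test_batches)                  # Keras 2: number of batches
batches_per_chunk = n_batches // 4 + 1         # roughly 4 chunks
chunk, chunk_id = [], 0
for i in range(n_batches):
    x = next(test_batches)                     # one batch of test images
    chunk.append(conv_model.predict_on_batch(x))
    if len(chunk) == batches_per_chunk or i == n_batches - 1:
        np.save('conv_test_feats_{}.npy'.format(chunk_id), np.concatenate(chunk))
        chunk, chunk_id = [], chunk_id + 1     # free the chunk before continuing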

Ah, my mistake! I was thinking the output features were 70K * 10, as they would be from the whole model. However, the output of the conv layers is 70K * 512 * 14 * 14.

Also, the space taken up by Python numbers is bigger than I realised (a quick check is sketched below):

  • 28 bytes per int
  • 24 bytes per float
  • 8 bytes per cell in an np.array (plus overhead, less some compression for zeros, I think?)
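
A rough sanity check of those figures (the object sizes are for a 64-bit CPython build and can vary by version; the array shape is the 70K * 512 * 14 * 14 one mentioned above):

import sys
import numpy as np

# CPython box sizes for plain Python numbers
print(sys.getsizeof(1))      # 28 bytes for a small int object
print(sys.getsizeof(1.0))    # 24 bytes for a float object

# A numpy array stores one fixed-size cell per element plus a small header,
# so the conv-feature output is huge even with no per-object overhead:
n_bytes = 70000 * 512 * 14 * 14 * np.dtype(np.float32).itemsize
print(n_bytes / 2**30)       # roughly 26 GiB at float32, double that at float64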

Anyway, there is no memory leak here. The output is massive, so the only answer is to save it as you go along.


I got 98 percent in 4 epochs and I tested it: it works.

It's a very simple model, 98 percent in 4 epochs.
I skipped the imports at the start, so here is the rest of the code:

from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD

model = Sequential()

# two small conv blocks on 32x32 RGB inputs, each followed by 2x2 max pooling
model.add(Conv2D(16, 3, input_shape = (32, 32, 3), activation = 'relu'))
model.add(Conv2D(16, 3, activation = 'relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2)))

model.add(Conv2D(32, 3, activation = 'relu'))
model.add(Conv2D(32, 3, activation = 'relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2)))

# flatten straight into a 10-way softmax classifier
model.add(Flatten())
model.add(Dense(10, activation='softmax'))

from keras.preprocessing.image import ImageDataGenerator

# light augmentation on the training set only; both sets are rescaled to [0, 1]
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.1,
                                   zoom_range = 0.1)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('train',
                                                 target_size = (32, 32),
                                                 batch_size = 4,
                                                 class_mode = 'categorical')

test_set = test_datagen.flow_from_directory('val',
                                            target_size = (32, 32),
                                            batch_size = 4,
                                            class_mode = 'categorical')

model.compile(optimizer=SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False),
              loss='categorical_crossentropy', metrics=['accuracy'])

model.fit_generator(training_set,
                    steps_per_epoch = 22400/4,
                    epochs = 2,
                    validation_data = test_set,
                    validation_steps = 4000/4)

Hey Avi!
Great results, but I’m afraid it looks just too good to be true :slight_smile:
Did you make sure to split your training/validation data by driver?
Also, did you try predicting on the test set and submitting the results to the Kaggle competition to see exactly how well you’re doing?

Good luck and have fun!


Congrats! We talk about this technique a bit in lessons 9 and 10, so you’ll be ahead of the curve when you get there :slight_smile:

I’m only on lesson 4, so apologies if this question is getting ahead of myself, but I did a little reading on the winning entry, and it looks like the foundation is a single VGG16 model, which got the participant to a leaderboard score of 0.3 (the winner then did a bunch of other cool stuff to get to #1). This is very impressive to me, as I’ve battled to get a private leaderboard score of 0.68 (public 0.81) and can’t seem to get beyond this threshold. I was wondering if anyone has tips/tricks for getting a better score with a single VGG model? Has anyone achieved a score anywhere close to the 0.3 range with a single VGG? Would love to hear your thoughts/approaches!

Thanks, chris, for sharing this!

To whom it may concern… I changed the function a little to make it more transparent to me what I am doing. This moves the first three drivers to the validation set:

import os
import pandas as pd

# driver_imgs_list.csv maps each image to its driver and class
df = pd.read_csv(path + 'driver_imgs_list.csv')
df.columns = ("driver", "label", "filename")
drivers = df.driver.unique()

# hold out the first three drivers for validation
val_drivers = drivers[:3]
print(val_drivers)

grouped = df.groupby('driver')

for val_d in val_drivers:
    print(val_d)
    df_driver = grouped.get_group(val_d)
    print(df_driver.values)

    # move every image belonging to this driver from _train to _validate
    for (subject, cls, img) in df_driver.values:
        source = '{}_train/{}/{}'.format(path, cls, img)
        target = source.replace('_train', '_validate')
        print('mv {} {}'.format(source, target))
        os.rename(source, target)

I created a little notebook that lets me create various train/validation splits (default of 5 folds) where the training and validation data always contain separate drivers. It yields roughly an 80/20 data split each time. This could be used to create a 5-model ensemble, for example, by re-configuring train/val between each training run.
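
For anyone who wants the same behaviour without the notebook, here is a minimal sketch of the idea using scikit-learn’s GroupKFold (it assumes driver_imgs_list.csv is in the working directory; turning the resulting DataFrame splits into directory moves or file lists for Keras is left out):

import pandas as pd
from sklearn.model_selection import GroupKFold

# Build 5 folds where every driver appears in either train or validation, never both
df = pd.read_csv('driver_imgs_list.csv')   # columns: subject, classname, img

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(gkf.split(df, groups=df['subject'])):
    train_df, val_df = df.iloc[train_idx], df.iloc[val_idx]
    assert not set(train_df.subject) & set(val_df.subject)   # disjoint driver sets
    print(fold, len(train_df), len(val_df))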

Hi,

In the State Farm problem, I tried using transfer learning with the help of Keras’s VGG16 model. When I shuffle the training data (set shuffle = True), the accuracy I get on the training data is abysmal. When shuffle = False, I get good training accuracy. Why does setting shuffle = True in get_batches() on the training data cause such a difference in accuracy? Shouldn’t shuffle=True lead to better accuracy?

The code I have used is as follows:

import numpy as np
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras.applications import VGG16
from keras.utils.np_utils import to_categorical

model = VGG16(include_top=False, weights='imagenet')

batch_size = 64

datagen = ImageDataGenerator(rescale=1. / 255)

def get_batches(dirname, gen = image.ImageDataGenerator(), shuffle=True, batch_size=batch_size):        
    batch_gen = gen.flow_from_directory(dirname, target_size=(224,224), 
            class_mode='categorical', shuffle=shuffle, batch_size=batch_size)    
    num_batch = len(batch_gen)
    return batch_gen, num_batch

generator, num_train_batches = get_batches('./data/train', gen=datagen, shuffle=False)

train_labels = to_categorical(generator.classes)

train_data = model.predict_generator(generator, num_train_batches)

generator, num_valid_batches = get_batches('./data/valid', gen=datagen,
                                                shuffle=False, batch_size=batch_size * 2)

validation_labels = to_categorical(generator.classes)

# compute the validation conv features with the VGG16 model before it is redefined below
validation_data = model.predict_generator(generator, num_valid_batches)

model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='adam',
              loss='categorical_crossentropy', metrics=['accuracy'])

model.optimizer.lr = 1e-5

model.fit(train_data, train_labels,
          epochs=5,
          batch_size=batch_size,
          validation_data=(validation_data, validation_labels), verbose=1)

When shuffle=False:
Train on 17968 samples, validate on 4456 samples
Epoch 1/5
17968/17968 [==============================] - 4s 224us/step - loss: 1.7476 - acc: 0.4617 - val_loss: 1.4700 - val_acc: 0.6914
Epoch 2/5
17968/17968 [==============================] - 4s 197us/step - loss: 0.7867 - acc: 0.8535 - val_loss: 1.0974 - val_acc: 0.7417
Epoch 3/5
17968/17968 [==============================] - 4s 197us/step - loss: 0.4472 - acc: 0.9335 - val_loss: 0.9330 - val_acc: 0.7554
Epoch 4/5
17968/17968 [==============================] - 4s 199us/step - loss: 0.2926 - acc: 0.9613 - val_loss: 0.8638 - val_acc: 0.7655
Epoch 5/5
17968/17968 [==============================] - 4s 197us/step - loss: 0.2131 - acc: 0.9726 - val_loss: 0.8067 - val_acc: 0.7747

When shuffle=True:
Train on 17968 samples, validate on 4456 samples
Epoch 1/5
17968/17968 [==============================] - 4s 215us/step - loss: 2.3914 - acc: 0.0995 - val_loss: 2.2935 - val_acc: 0.1086
Epoch 2/5
17968/17968 [==============================] - 4s 201us/step - loss: 2.3025 - acc: 0.1113 - val_loss: 2.2982 - val_acc: 0.1241
Epoch 3/5
17968/17968 [==============================] - 4s 199us/step - loss: 2.2991 - acc: 0.1182 - val_loss: 2.2997 - val_acc: 0.0866
Epoch 4/5
17968/17968 [==============================] - 4s 203us/step - loss: 2.2961 - acc: 0.1163 - val_loss: 2.2995 - val_acc: 0.0880
Epoch 5/5
17968/17968 [==============================] - 4s 201us/step - loss: 2.2928 - acc: 0.1229 - val_loss: 2.3032 - val_acc: 0.0911

I am trying to train on the augmented data, which is 5 times the size of the dataset. Since that leads to a memory error, I was trying the BcolzArrayIterator class: https://github.com/jph00/part2/blob/master/bcolz_array_iterator.py

But when I use it as described in the quote to run model.fit_generator, I get the error below:

It seems the Keras fit_generator is not recognising the BcolzArrayIterator type when it is given as the input generator…

Does anybody have thoughts on this?

Hi Jeremy, I have two questions and I hope you can help me. I am currently working on a capstone project with a deadline in a week.
I am getting good accuracy and low loss on both the train and valid sets, and there is no overfitting when I plot the graphs for acc/val_acc and loss/val_loss.
I am facing issues while predicting, as I run out of memory when loading the test images,
so I shifted to using a generator on the test images. There are 79726 images, so I selected batch_size = 32.
Here are my questions:

  1. While iterating over the test set with this batch size, I am left with 14 images after 2491 full batches. What does the generator do for that last batch: take those 14 images plus an extra 18 that have already been predicted? If so, how do I avoid that and predict exactly 79726 images? (See the sketch after this post.)
  2. How can I validate my predictions now that the competition is closed and I can’t submit to Kaggle to see the score?

Thanks, and looking forward to hearing from you!
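
For reference, a minimal sketch of how the final partial batch is usually handled in Keras 2. It assumes `model` is the trained Keras model, the 79726 test images sit in a single subfolder under test/, and the target size and rescaling match whatever preprocessing was used in training (all assumptions about this particular setup). The DirectoryIterator yields a smaller final batch of 14 rather than wrapping around, so ceil(n / batch_size) steps gives exactly one prediction per image:

import math
from keras.preprocessing.image import ImageDataGenerator

batch_size = 32

# class_mode=None and shuffle=False: yield only images, in filename order
test_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'test', target_size=(224, 224), class_mode=None,
    shuffle=False, batch_size=batch_size)

# 79726 / 32 -> 2491 full batches plus one final batch of 14 images
steps = math.ceil(test_gen.samples / batch_size)   # 2492
preds = model.predict_generator(test_gen, steps)   # `model` is assumed to be trained already
print(preds.shape)                                 # expect (79726, 10)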