Statefarm low accuracy, possibly related to validation set creation

Hey guys, I’m a late joiner to the course. Right now I’m trying the StateFarm challenge, and since I joined late I’m starting by working through the notebooks Jeremy posted. Before running statefarm-sample.ipynb, I wrote the script below to create a validation set, basically picking out 5 of the 26 drivers. However, running all of the methods Jeremy had, I got much lower validation accuracy. For example, with the single convolution layer approach on the full dataset (the first attempt in statefarm.ipynb), I only get 15.5% accuracy versus Jeremy’s 61%. Can anyone help? Thanks a lot!

I’m using Keras 2, Theano 0.9.0 and Python 3 on a personal computer with 16 GB of memory and a GTX 1080.

import os
from glob import glob
from shutil import move

import pandas as pd

# Map images to drivers, then hold out the first 5 of the 26 drivers
driver_img_list = pd.read_csv('driver_imgs_list.csv')
drivers = driver_img_list['subject'].unique()
valid_driver_list = drivers[:len(drivers) // 5]
driver_img_list.set_index('subject', inplace=True)
valid_imgs = set(driver_img_list.loc[valid_driver_list, 'img'].values)

# Create valid/c0 .. valid/c9 and move the held-out drivers' images there
# (run this from inside the train/ directory)
for i in range(10):
    os.makedirs('../valid/c' + str(i), exist_ok=True)

g = glob('c?/*.jpg')
for file in g:
    if file[3:] in valid_imgs:  # strip the 'c?/' prefix to get the filename
        move(file, '../valid/' + file)
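
As a quick sanity check (a minimal sketch, assuming you are still inside the train/ directory after running the above), you can count the images per class on each side of the split:

import os

for d in ['.', '../valid']:
    counts = {c: len(os.listdir(os.path.join(d, c)))
              for c in sorted(os.listdir(d)) if c.startswith('c')}
    print(d, counts)

If the valid/ folders come out empty, the filename matching went wrong somewhere.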

Hi,

I used pandas, with the following code to create my validation set:

import random

# df is driver_imgs_list.csv loaded with pandas
drivers = list(df.groupby('subject').groups.keys())
# 5 drivers = roughly 20% of the 26 drivers
# NB: the test data includes drivers that we have not
# seen, so the validation set also needs unseen drivers
drivers_for_validation = random.sample(drivers, 5)
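
To actually carve those drivers out on disk, something like this should work (just a sketch, assuming the classname and img columns of driver_imgs_list.csv, that you run it from the train/ directory, and that valid/c0 .. valid/c9 already exist):

from shutil import move

# Move every image belonging to a held-out driver into valid/
valid_rows = df[df['subject'].isin(drivers_for_validation)]
for _, row in valid_rows.iterrows():
    src = row['classname'] + '/' + row['img']  # e.g. 'c0/img_123.jpg'
    move(src, '../valid/' + src)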

Hope you might find this helpful, good luck!

Are you running this against the full or sample dataset?

Also, the scores you are getting are not much better than guessing at random (10% for 10 classes), and this may in fact be due to how you are creating your training and validation batches. If the labels don’t match the images, this is exactly the kind of accuracy you see.
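
One quick way to check this, assuming you are using Keras’ ImageDataGenerator the way the course notebooks do, is to pull a single batch and compare the labels against the images by eye:

from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator()
batches = gen.flow_from_directory('valid', target_size=(224, 224),
                                  batch_size=4, shuffle=False)
imgs, labels = next(batches)
# class_indices maps folder names (c0..c9) to label columns, so the
# one-hot rows in `labels` should line up with the images you plot
print(batches.class_indices)
print(labels)

If the one-hot labels don’t correspond to the classes you see when you plot the images, the batch creation is the problem.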

Thanks. Using random drivers for validation (I used the first 5 drivers before) improved things, but I’m still not getting to Jeremy’s level. For the single convolution layer on the full dataset (no augmentation etc.), I’m getting 53% accuracy compared to Jeremy’s 61%. Have you experienced anything similar? Is this just down to random initialization and the train/validation split, or did I do something wrong? Thanks!

Hi.

Sorry, been away for a while. Are you still experiencing this problem?

It’s not really about picking the drivers at random; the point is to make sure that the drivers in your validation set are NOT in your training set, mirroring the test set, which contains drivers you have never seen. Otherwise the network will just learn to recognize the drivers, not the action they are doing.
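
If you want to confirm the split is clean, a quick check against driver_imgs_list.csv (a sketch, assuming train/ and valid/ sit next to each other, each with c0 .. c9 subfolders) is:

import os
import pandas as pd

df = pd.read_csv('driver_imgs_list.csv')
# Map each filename back to its driver
img_to_driver = dict(zip(df['img'], df['subject']))

def drivers_in(path):
    found = set()
    for c in os.listdir(path):
        for f in os.listdir(os.path.join(path, c)):
            found.add(img_to_driver[f])
    return found

print('overlap:', drivers_in('train') & drivers_in('valid'))  # should be empty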