I am able to get the basic vgg_bn model running. However, when I try to split the original model into conv_model and fc_model, the validation accuracy is terrible (this is the case with Dogs vs Cats as well as the Fisheries competition).
Here's the code that I implemented:
model.load_weights(path+'results/ft2.h5')
conv_layers,fc_layers = split_at(model, Convolution2D)
conv_model = Sequential(conv_layers)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
def get_bn_layers(p):
    return [
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        BatchNormalization(axis=1),
        Dropout(p/4),
        Flatten(),
        Dense(512, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(512, activation='relu'),
        BatchNormalization(),
        Dropout(p/2),
        Dense(8, activation='softmax')
    ]

p = 0.6
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=15,
             validation_data=(conv_val_feat, val_labels))
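For context, the fit call above pairs row i of the precomputed features with row i of the saved labels, so the two arrays have to match in length and order. A minimal numpy sketch of that invariant (every shape below is a made-up example, not the real conv_model output):

```python
import numpy as np

# Toy stand-in for the precomputed features: in the real code conv_feat comes
# from conv_model.predict_generator, so these shapes are purely illustrative.
n_train, n_classes = 32, 8
rng = np.random.RandomState(0)
conv_feat = rng.randn(n_train, 512, 14, 14)               # hypothetical conv output shape
trn_labels = np.eye(n_classes)[rng.randint(0, n_classes, n_train)]  # one-hot labels

# bn_model.fit(conv_feat, trn_labels, ...) trains on these row by row,
# so sample i of the features must describe the same image as label i.
print(conv_feat.shape[0] == trn_labels.shape[0])
```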
Also, here's how I created the data initially:
batches = get_batches(path+'train', shuffle=True, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
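For reference, onehot here is the course helper; a minimal numpy equivalent, assuming it simply one-hot encodes the integer class ids (like keras's to_categorical):

```python
import numpy as np

def onehot(classes, n_classes=None):
    """Minimal numpy sketch of the course's onehot() helper (assumption:
    it one-hot encodes integer class ids, like keras's to_categorical)."""
    classes = np.asarray(classes)
    if n_classes is None:
        n_classes = int(classes.max()) + 1
    # Indexing the identity matrix by class id yields one-hot rows.
    return np.eye(n_classes)[classes]

print(onehot([0, 2, 1]))
# → [[1. 0. 0.]
#    [0. 0. 1.]
#    [0. 1. 0.]]
```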
Please note: I tried shuffle=True for both the train and validation batches, and shuffle=False for both as well. Either way, the accuracy on the validation set is bad.
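One way to picture what shuffle does to this pipeline (a toy illustration, not the course code): batches.classes is in directory order, while a shuffling generator yields images in a different order, so precomputed feature rows can stop lining up with labels that were saved separately:

```python
import numpy as np

# Labels saved in directory order (what batches.classes would give).
labels = np.arange(10)

# Order in which a shuffling generator might actually yield the images.
order = np.random.RandomState(0).permutation(10)

# Pretend feature i perfectly encodes the class of the image it came from;
# the features are then stored in the generator's (shuffled) order.
features = labels[order]

# The saved labels no longer match the feature rows position by position.
misaligned = bool((features != labels).any())
print(misaligned)  # → True
```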