In the Lesson 2 video, Jeremy appears to reach roughly 98% validation accuracy. I'm unable to get above 92%. Further, no matter how much I train (with a variety of learning rates), my training loss and accuracy fail to improve any further.
Might you have any advice on how I can improve my result? I’ll provide context below.
Environment: I'm running on my own GTX 1060, which can handle a maximum batch size of 32.
Variables that I have explored:
- Batch size (from 8 to 32)
- Learning rate (0.1, 0.05, 0.01, 0.005, 0.001)
- Number of epochs (up to 15)
No matter the variable explored, I top out at 92%.
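For concreteness, the sweep above amounts to roughly this grid (a sketch of the search space; I didn't necessarily try every pairing exhaustively):

```python
# Illustrative grid of the hyperparameters explored; the exact
# combinations actually run are not enumerated here.
from itertools import product

batch_sizes = [8, 16, 24, 32]
learning_rates = [0.1, 0.05, 0.01, 0.005, 0.001]
grid = list(product(batch_sizes, learning_rates))  # 20 combinations
```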
Because the validation set is created at random, I wondered whether I had simply drawn an unusually challenging one. However, re-seeding and generating a new validation set produced the same plateau in training/validation accuracy.
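By "re-seed" I mean something along these lines; the split helper and filenames here are illustrative, not the course utilities:

```python
# Hypothetical sketch of regenerating the validation split with a new seed.
import random

def split_train_valid(filenames, valid_fraction=0.2, seed=42):
    """Deterministically shuffle, then carve off a validation slice."""
    rng = random.Random(seed)      # changing the seed changes the split
    shuffled = sorted(filenames)   # canonical order before shuffling
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_fraction)
    return shuffled[n_valid:], shuffled[:n_valid]
```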
For context, here is my code. I add the fine-tuning step manually:
from vgg16 import Vgg16
from keras.layers.core import Dense

vgg = Vgg16()
vgg.model.pop()                    # drop the 1000-way ImageNet classifier
for layer in vgg.model.layers:
    layer.trainable = False        # freeze all remaining layers
vgg.model.add(Dense(2, activation='softmax'))  # new 2-class output layer
I create the batches with:
batch_size = 24
batches = get_batches(train_path, batch_size=batch_size, shuffle=True)
val_batches = get_batches(valid_path, batch_size=batch_size, shuffle=True)
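(`get_batches` comes from the course's utility code; conceptually it just serves the shuffled images in fixed-size chunks, along the lines of this pure-Python sketch, which is illustrative only and not the real implementation:)

```python
def make_batches(items, batch_size=24):
    """Split a list into consecutive chunks of at most batch_size items."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
```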
I then run training in intervals using the following function, which calls vgg.model.compile() (to set the learning rate) and vgg.model.fit_generator() (to fit the training data):
from keras.optimizers import RMSprop

def run_epochs(last_epoch, no_of_epochs, learning_rate):
    # Recompile to apply the new learning rate.
    vgg.model.compile(optimizer=RMSprop(lr=learning_rate),
                      loss='categorical_crossentropy', metrics=['accuracy'])
    print "Running %d additional epochs with lr=%f." % (no_of_epochs, learning_rate)
    latest_weights_filename = (results_path + 'ft%d.h5' % last_epoch) if last_epoch > 0 else None
    if latest_weights_filename:
        print "Loading weights: %s" % latest_weights_filename
        vgg.model.load_weights(latest_weights_filename)
    for epoch in range(last_epoch + 1, last_epoch + no_of_epochs + 1):
        print "Running epoch: %d" % epoch
        vgg.model.fit_generator(
            batches, samples_per_epoch=batches.n, nb_epoch=1,
            validation_data=val_batches, nb_val_samples=val_batches.n)
        # Checkpoint after every epoch so later calls can resume.
        latest_weights_filename = results_path + 'ft%d.h5' % epoch
        vgg.model.save_weights(latest_weights_filename)
        print "Saved weights: %s" % latest_weights_filename
    print "Completed %d fit operations" % no_of_epochs
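To be explicit about how successive calls chain together via the saved weights, here is the filename bookkeeping isolated from the model code (a sketch with the fitting stubbed out; the 'results/' default is hypothetical, standing in for my results_path):

```python
def checkpoint_names(last_epoch, no_of_epochs, results_path='results/'):
    """Mirror run_epochs' bookkeeping for one call: which weights file
    is loaded at the start, and which files get saved per epoch."""
    load = (results_path + 'ft%d.h5' % last_epoch) if last_epoch > 0 else None
    saved = [results_path + 'ft%d.h5' % e
             for e in range(last_epoch + 1, last_epoch + no_of_epochs + 1)]
    return load, saved
```

So a first call with last_epoch=0 starts fresh and saves ft1.h5 onward, and a follow-up call with last_epoch set to the previous final epoch resumes from that checkpoint.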
Any feedback would be greatly appreciated.