I was trying to implement a single-layer model on top of VGG's predictions, to map the 1000 categories predicted by VGG to dogs vs. cats, very similar to the lesson 2 notebook. But the validation accuracy stays close to 50%, which is as good as random. I don't think it should be this low, but I am unable to figure out where I am going wrong.
vgg = Vgg16()
model = vgg.model
batch_size = 64
nb_epoch = 5

# Batches and one-hot labels for train / validation
val_batches = get_batches(val_path, batch_size=batch_size)
trn_batches = get_batches(train_path, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = trn_batches.classes
val_labels = to_categorical(val_classes, val_batches.nb_class)
trn_labels = to_categorical(trn_classes, trn_batches.nb_class)

# Precompute VGG's 1000-way predictions to use as features
trn_feat = model.predict_generator(trn_batches, trn_batches.nb_sample)
val_feat = model.predict_generator(val_batches, val_batches.nb_sample)

# Single dense layer mapping the 1000 ImageNet categories to 2 classes
lm = Sequential([Dense(2, activation='softmax', input_shape=(1000,))])
lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])
lm.fit(trn_feat, trn_labels, batch_size=batch_size, nb_epoch=nb_epoch,
       validation_data=(val_feat, val_labels))
and here is the training output:
Train on 23000 samples, validate on 2000 samples
Epoch 1/5
23000/23000 [==============================] - 1s - loss: 0.7207 - acc: 0.4984 - val_loss: 0.7199 - val_acc: 0.4940
Epoch 2/5
23000/23000 [==============================] - 1s - loss: 0.7171 - acc: 0.5175 - val_loss: 0.7583 - val_acc: 0.5100
Epoch 3/5
23000/23000 [==============================] - 1s - loss: 0.7168 - acc: 0.5227 - val_loss: 0.7425 - val_acc: 0.5005
Epoch 4/5
23000/23000 [==============================] - 1s - loss: 0.7198 - acc: 0.5244 - val_loss: 0.7814 - val_acc: 0.5025
Epoch 5/5
23000/23000 [==============================] - 1s - loss: 0.7214 - acc: 0.5288 - val_loss: 0.7587 - val_acc: 0.4905
<keras.callbacks.History at 0x7ff29905b250>
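For comparison, ~50% really is exactly what you'd see if the features carried no information about the labels. Here's a quick self-contained numpy sketch (purely synthetic data, no Keras) of the same shape of model, a single softmax layer on 1000-dim features, trained on features that are unrelated to the labels; its validation accuracy hovers at chance, just like my run above:

```python
import numpy as np

rng = np.random.RandomState(0)

# Synthetic stand-ins: 1000-dim "features" with no relation to the labels
n_trn, n_val, d = 2000, 2000, 1000
X_trn = rng.randn(n_trn, d)
y_trn = rng.randint(0, 2, size=n_trn)
X_val = rng.randn(n_val, d)
y_val = rng.randint(0, 2, size=n_val)
Y_trn = np.eye(2)[y_trn]  # one-hot, like to_categorical

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Single softmax layer: same shape as the Dense(2) model above
W = np.zeros((d, 2))
b = np.zeros(2)

lr = 0.1
for epoch in range(5):
    p = softmax(X_trn @ W + b)
    W -= lr * (X_trn.T @ (p - Y_trn)) / n_trn  # cross-entropy gradient
    b -= lr * (p - Y_trn).mean(axis=0)

acc = (softmax(X_val @ W + b).argmax(axis=1) == y_val).mean()
print(acc)  # stays near 0.5: the layer can't beat chance on unrelated features
```

So the numbers above look less like "the model is weak" and more like "the features and labels aren't lining up at all".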
Any idea what I am missing?