Poor accuracy by linear model on top of VGG


(Akshay Kumar) #1

I was trying to implement a single-layer model on top of VGG's predictions to convert the 1000 categories predicted by VGG into dogs vs. cats, very similar to the lesson2 notebook, but the validation accuracy drops close to 50%, which is as good as random. I don't think it should be this low, but I can't figure out where I'm going wrong.

```python
# assuming the course's usual imports, e.g.:
# from vgg16 import Vgg16
# from utils import get_batches
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import RMSprop
from keras.utils.np_utils import to_categorical

vgg = Vgg16()
model = vgg.model
batch_size = 64
nb_epoch = 5
val_batches = get_batches(val_path, batch_size=batch_size)
trn_batches = get_batches(train_path, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = trn_batches.classes
val_labels = to_categorical(val_classes, val_batches.nb_class)
trn_labels = to_categorical(trn_classes, trn_batches.nb_class)
trn_feat = model.predict_generator(trn_batches, trn_batches.nb_sample)
val_feat = model.predict_generator(val_batches, val_batches.nb_sample)
lm = Sequential([Dense(2, activation='softmax', input_shape=(1000,))])
lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy',
           metrics=['accuracy'])
lm.fit(trn_feat, trn_labels, batch_size=batch_size, nb_epoch=nb_epoch,
       validation_data=(val_feat, val_labels))
```

and here is the result of training:

```
Train on 23000 samples, validate on 2000 samples
Epoch 1/5
23000/23000 [==============================] - 1s - loss: 0.7207 - acc: 0.4984 - val_loss: 0.7199 - val_acc: 0.4940
Epoch 2/5
23000/23000 [==============================] - 1s - loss: 0.7171 - acc: 0.5175 - val_loss: 0.7583 - val_acc: 0.5100
Epoch 3/5
23000/23000 [==============================] - 1s - loss: 0.7168 - acc: 0.5227 - val_loss: 0.7425 - val_acc: 0.5005
Epoch 4/5
23000/23000 [==============================] - 1s - loss: 0.7198 - acc: 0.5244 - val_loss: 0.7814 - val_acc: 0.5025
Epoch 5/5
23000/23000 [==============================] - 1s - loss: 0.7214 - acc: 0.5288 - val_loss: 0.7587 - val_acc: 0.4905
<keras.callbacks.History at 0x7ff29905b250>
```

Any idea what I am missing ?


(Matthijs) #2

Not 100% sure what's going on here, but it seems as though you're adding a new Sequential model on top of the full VGG16 model, so you're stacking a second classifier after the existing classifier, whereas you're supposed to replace the existing classifier with your own.

In other words, the “features” you’re extracting from VGG aren’t really features at all, but the probabilities over the 1000 ImageNet classes.
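To make that concrete, here's a minimal sketch of the replace-the-classifier idea, in the style of the course's `finetune` helper. The model below is a small stand-in for `vgg.model` (just any Sequential ending in a 1000-way softmax; the layer sizes are made up, not VGG's real dimensions):

```python
from keras.models import Sequential
from keras.layers import Dense

# stand-in for vgg.model: any Sequential ending in a 1000-way softmax
model = Sequential([
    Dense(64, activation='relu', input_shape=(512,)),
    Dense(1000, activation='softmax'),
])

model.pop()                                # drop the ImageNet classifier
for layer in model.layers:
    layer.trainable = False                # keep pretrained weights fixed
model.add(Dense(2, activation='softmax'))  # new dogs-vs-cats output layer
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy', metrics=['accuracy'])
```

The new last layer then sees the penultimate activations (4096-dimensional in real VGG16) rather than the 1000 class probabilities.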


(Akshay Kumar) #3

Yes, you got it right. I am adding a dense layer that takes the 1000 predictions for each sample as input features and predicts the probabilities of being a dog or a cat. We can also replace the last layer as you're saying; Jeremy applies both approaches in the lesson2 notebook and gets more than 97% accuracy.
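For intuition on why the linear-on-probabilities approach can work at all, here is a toy pure-NumPy sketch: synthetic "ImageNet probability" vectors whose mass falls on made-up dog or cat class indices (the index ranges are hypothetical, not the real ImageNet slots), with a single softmax layer trained on top by plain gradient descent:

```python
import numpy as np

rng = np.random.RandomState(0)
n, k = 2000, 1000
# hypothetical stand-ins for the ImageNet indices of dog/cat breeds
dog_idx = np.arange(0, 120)
cat_idx = np.arange(120, 150)

def fake_probs(idx_pool, m):
    """Probability vectors with most mass on one class from idx_pool."""
    p = rng.rand(m, k) * 0.01
    p[np.arange(m), rng.choice(idx_pool, size=m)] += 5.0
    return p / p.sum(axis=1, keepdims=True)

X = np.vstack([fake_probs(dog_idx, n // 2), fake_probs(cat_idx, n // 2)])
y = np.array([0] * (n // 2) + [1] * (n // 2))
Y = np.eye(2)[y]                               # one-hot labels

# a single dense layer + softmax, trained with gradient descent
W, b = np.zeros((k, 2)), np.zeros(2)
for _ in range(200):
    z = X @ W + b
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    g = (p - Y) / n                            # softmax cross-entropy grad
    W -= 0.5 * (X.T @ g)
    b -= 0.5 * g.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
```

Since dog-like and cat-like probability vectors put their mass on disjoint index ranges, they are linearly separable and the accuracy climbs well above chance, which is the same effect the lesson2 linear model exploits on real VGG outputs.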


(Matthijs) #4

Ah yes, so he does. Well, I’d still try removing the last layer and training your model that way, and see if the results are any better.


(Akshay Kumar) #5

Sure, will try that and report back soon. 🙂