Problem: The validation accuracy and loss are not improving after a certain point.
Running epoch: 1
Epoch 1/1
360/360 [==============================] - 86s 238ms/step - loss: 1.4263 - acc: 0.8710 - val_loss: 0.9621 - val_acc: 0.9167
Running epoch: 2
Epoch 1/1
360/360 [==============================] - 83s 231ms/step - loss: 1.4721 - acc: 0.8888 - val_loss: 1.1278 - val_acc: 0.9153
Running epoch: 3
Epoch 1/1
360/360 [==============================] - 84s 233ms/step - loss: 1.4769 - acc: 0.8931 - val_loss: 1.0051 - val_acc: 0.9263
Running epoch: 4
Epoch 1/1
360/360 [==============================] - 85s 235ms/step - loss: 1.4978 - acc: 0.8945 - val_loss: 1.1413 - val_acc: 0.9160
Running epoch: 5
Epoch 1/1
360/360 [==============================] - 90s 251ms/step - loss: 1.4907 - acc: 0.8975 - val_loss: 1.1125 - val_acc: 0.9230
Completed 5 fit operations
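For context, the "Running epoch: N" lines followed by "Epoch 1/1" come from calling fit once per epoch in a manual loop. This is roughly the shape of that loop (a minimal sketch, not the exact code in my branch; `model`, `train_batches`, and `val_batches` are placeholders for the actual objects):

```python
# Minimal sketch of the loop that produces the logs above.
# Each iteration calls fit_generator() with epochs=1, which is why
# Keras prints "Epoch 1/1" every time.
for epoch in range(1, 6):
    print('Running epoch: %d' % epoch)
    model.fit_generator(
        train_batches,
        steps_per_epoch=360,   # matches the 360 steps per epoch in the logs
        epochs=1,
        validation_data=val_batches,
        validation_steps=val_batches.n // val_batches.batch_size)
print('Completed 5 fit operations')
```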
Here’s the current GitHub branch I’m working on; I have ported the code to Keras 2 wherever needed.
Things I tried:
- Increased the number of epochs from 3 to 10, then settled on 5 because the loss only got worse after that point.
- Increased the learning rate from 0.001 to 0.01, but that only made training converge faster; the metrics in question stayed the same (sketch below).
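For reference, this is how I changed the learning rate (a sketch assuming the Adam optimizer and the Keras 2 API; `model` and the loss are placeholders based on the `acc` column in the logs):

```python
from keras.optimizers import Adam

# Recompile with a 10x higher learning rate (0.001 -> 0.01).
# Assumptions: Adam optimizer and categorical cross-entropy loss;
# metrics=['accuracy'] matches the `acc` column in the logs.
model.compile(optimizer=Adam(lr=0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```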
Things I could try:
- Since I re-arranged the data a bit differently, could that be a factor in the gap between @jeremy’s training (which reached around 97% accuracy) and mine? I could try running the model directly on the already-arranged data provided on the platform.
- I’ll add a TensorBoard callback to capture more detailed information and post the results here (sketch below).
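Roughly what I plan to add (a sketch using the standard Keras 2 TensorBoard callback; the log directory and variable names are placeholders):

```python
from keras.callbacks import TensorBoard

# Attach a TensorBoard callback so per-epoch metrics get logged to disk.
# log_dir is arbitrary; histogram_freq=0 avoids issues with generator input.
tb = TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True)

model.fit_generator(
    train_batches,
    steps_per_epoch=360,
    epochs=5,
    validation_data=val_batches,
    validation_steps=val_batches.n // val_batches.batch_size,
    callbacks=[tb])
```

Then `tensorboard --logdir=./logs` to inspect the loss/accuracy curves.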
Any suggestions as to why this is happening and what I could do to improve/fix it?