So we mentioned that a typical reason for validation accuracy being lower than training accuracy is overfitting. I also assume that when the opposite is true, it's because my model is underfitting the data.
My question is in a few parts:
- Is my assumption above true? Does val. acc. > train acc. imply underfitting?
- What are the key techniques for avoiding underfitting, besides training for longer and reducing dropout?
- How do I choose the model I want to run on my test data? Can I just pick the output with the highest validation accuracy?
EDIT: For example, the output from two different epochs on Redux:
    Epoch 2: loss: 0.4074 - acc: 0.9744 - val_loss: 0.2066 - val_acc: 0.9868
    Epoch 3: loss: 0.3865 - acc: 0.9757 - val_loss: 0.3739 - val_acc: 0.9768
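On the third question, one common approach is to record the metrics for every epoch and keep whichever checkpoint maximizes validation accuracy. A minimal sketch using the two epochs above (the metric values are copied from the log; the `history` structure itself is just an assumption for illustration, not Keras's actual History object):

```python
# Per-epoch metrics copied from the training log above.
history = [
    {"epoch": 2, "loss": 0.4074, "acc": 0.9744, "val_loss": 0.2066, "val_acc": 0.9868},
    {"epoch": 3, "loss": 0.3865, "acc": 0.9757, "val_loss": 0.3739, "val_acc": 0.9768},
]

# Select the epoch with the highest validation accuracy.
best = max(history, key=lambda h: h["val_acc"])
print(best["epoch"], best["val_acc"])  # -> 2 0.9868
```

In Keras you can get the same effect automatically with a `ModelCheckpoint` callback configured with `save_best_only=True` and `monitor` set to the validation metric, so only the best-scoring weights are kept on disk.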