OK, so your underlying question: how to interpret << >>
I'm not an expert, but my working assumptions have been:
- Typically, validation loss should track training loss while staying slightly higher. As long as validation loss is lower than or roughly equal to training loss, keep training.
- If training loss is still decreasing and validation loss is not increasing, again keep training.
- Once validation loss starts increasing, it is time to stop (the model is beginning to overfit).
- If overall accuracy is still not acceptable, review the mistakes the model is making and think about what to change:
  - More data? More or different data augmentations? Generated (synthetic) data?
  - A different architecture?
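The "stop when validation loss starts increasing" rule above is basically early stopping with patience. A minimal sketch in plain Python (this is a hypothetical helper I'm writing for illustration, not any particular library's API; `patience` and `min_delta` are assumed knobs, similar to what most frameworks expose):

```python
class EarlyStopping:
    """Stop when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")      # best validation loss seen so far
        self.bad_epochs = 0           # consecutive epochs without improvement

    def step(self, val_loss):
        """Record this epoch's validation loss; return True when it's time to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # improved: remember it and reset the counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1      # no improvement this epoch
        return self.bad_epochs >= self.patience


# Example: validation loss improves for a while, then starts rising.
stopper = EarlyStopping(patience=2)
for epoch, val_loss in enumerate([0.9, 0.7, 0.6, 0.65, 0.7, 0.8]):
    if stopper.step(val_loss):
        print(f"stopping at epoch {epoch}")  # stops once loss has risen for 2 epochs
        break
```

In practice you'd also keep a checkpoint of the weights from the best-validation epoch, since by the time you stop the current weights are already a few epochs past the optimum.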