I have fallen into a strategy of improving my model's loss by:
1. Running a few hundred epochs.
2. Saving the best-loss checkpoint and noting the result and hyperparameters.
3. Adjusting said hyperparameters (learning rate, momentum, weight decay).
4. Reloading the best-loss model.
5. Repeating from step 1 (a rough sketch of this loop is below).
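For concreteness, here is roughly what that loop looks like. I'm assuming a PyTorch-style setup; the toy linear model, random data, checkpoint filename, and hyperparameter values are all just placeholders for my real ones:

```python
import torch
import torch.nn as nn

# Placeholders for my real setup: a toy model, random data,
# and illustrative hyperparameter values.
model = nn.Linear(10, 1)
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

best_loss = float("inf")
for epoch in range(300):  # "a few hundred epochs"
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if loss.item() < best_loss:  # keep the best checkpoint so far
        best_loss = loss.item()
        torch.save(model.state_dict(), "best.pt")

# Between rounds I adjust the hyperparameters by hand, reload the
# best checkpoint, and run the loop again.
model.load_state_dict(torch.load("best.pt"))
```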
If the model starts to overfit I increase the weight decay; if it does not seem to be learning I increase the learning rate, and so on. With this strategy I am improving the loss, seemingly by luck; slowly and painfully, but it is getting better.
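In code, those between-round tweaks amount to editing the optimizer's param groups (again assuming PyTorch; the multipliers are just rough illustrative factors, not values I'm claiming are right):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

# If the model is overfitting: strengthen regularization.
for group in optimizer.param_groups:
    group["weight_decay"] *= 10

# If it does not seem to be learning: raise the learning rate instead.
for group in optimizer.param_groups:
    group["lr"] *= 3
```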
My question: is this a normal thing everyone does, or is it a sign that I have the wrong model/optimizer etc.? I feel my results should be repeatable if I started training a new model from scratch, but at the moment that's clearly not the case.
Now, please note I'm confident my model is working: I can overfit it on a few test cases, and the outputs on my validation set are not unreasonable (they make sense). I've read a number of articles and posts on improving loss and I think I've ticked most of those boxes.
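(By "overfit on a few test cases" I mean the usual sanity check of driving the training loss to near zero on a handful of samples. A minimal sketch, with the same placeholder model and random data standing in for my real ones:)

```python
import torch
import torch.nn as nn

# Sanity check: the model should be able to memorize a handful of samples.
model = nn.Linear(10, 1)                      # placeholder for my real model
x, y = torch.randn(4, 10), torch.randn(4, 1)  # "a few test cases"
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())  # should be near zero if model and loss are wired up right
```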
I just need some confirmation that I'm heading in the right direction.