Improving loss? Is constantly adjusting hyperparameters normal?

Hello,

I have fallen into a strategy for improving my model's loss, roughly as follows (a rough code sketch is below the list):

  1. Running a few hundred epochs.
  2. Saving the best-loss checkpoint and noting the result and hyperparameters.
  3. Adjusting said hyperparameters (learning rate, momentum, decay).
  4. Reloading the best-loss model.
  5. Repeating from step 1.
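In code, one pass of that loop looks roughly like this (PyTorch here, and the toy model, data and file name are just placeholders to make the sketch self-contained, not my real setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X_train, y_train = torch.randn(256, 10), torch.randn(256, 1)   # toy stand-in data
X_val, y_val = torch.randn(64, 10), torch.randn(64, 1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

def run_block(optimizer, n_epochs=300, best_loss=float("inf")):
    # Steps 1-2: train a block of epochs, checkpointing whenever the val loss improves.
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss_fn(model(X_train), y_train).backward()
        optimizer.step()
        with torch.no_grad():
            val_loss = loss_fn(model(X_val), y_val).item()
        if val_loss < best_loss:
            best_loss = val_loss
            torch.save(model.state_dict(), "best.pt")
    return best_loss

# First block with the initial hyperparameters.
best = run_block(torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4))

# Steps 3-5: reload the best checkpoint, tweak lr/momentum/decay by hand, train another block.
model.load_state_dict(torch.load("best.pt"))
best = run_block(
    torch.optim.SGD(model.parameters(), lr=3e-3, momentum=0.9, weight_decay=5e-4),
    best_loss=best,
)
```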

If the model starts to overfit I increase the decay; if it does not seem to be learning I increase the learning rate, and so on. With this strategy I am, somewhat by luck, improving the loss; it is slow and painful, but it is getting better.
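The adjustments themselves are just me changing the optimizer's settings between blocks, something along these lines (again a PyTorch sketch with a placeholder model, not my actual code):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # placeholder, just so the optimizer has parameters
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4)

# Overfitting -> raise the weight decay; not learning -> raise the learning rate.
for group in optimizer.param_groups:
    group["weight_decay"] *= 2.0
    group["lr"] *= 3.0
```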

My question: is this something everyone normally does? Or is it a sign that I have the wrong model/optimizer, etc.? I feel my results should be reproducible if I started training a new model from scratch, but at the moment that's clearly not the case.

Now, please note that I'm confident my model is working: I can overfit it on a few test cases, and the outputs on my validation set are not unreasonable (they make sense). I've read a number of articles and posts on improving loss, and I think I've ticked most of those boxes.

I just need some clarification that I’m heading in the right direction.

Regards

Dave