CycleGAN train_loss stops decreasing

I’m training CycleGAN (using the course-v3 sample) to create pencil-sketch portraits. After training for 50 epochs the results are plausible but not perfect.
My dataset has 1200 images in trainA and 1200 in trainB.
What are the proper approaches to try when train_loss stops decreasing and starts increasing while training CycleGAN? Should I continue with a smaller learning rate, use a bigger dataset, or give more weight to cyclic_loss?
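For context, by giving more weight to cyclic_loss I mean raising the lambda multiplier on the cycle-consistency term from the CycleGAN paper (its default is 10). Here is a rough sketch of the generator loss in plain PyTorch; the generators G_AB/G_BA and discriminators D_A/D_B are hypothetical names, and the course-v3 notebook wraps this differently, so treat it as illustrative only:

```python
import torch
import torch.nn as nn

adv_crit = nn.MSELoss()  # LSGAN-style adversarial loss, as in the CycleGAN paper
cyc_crit = nn.L1Loss()   # cycle-consistency (reconstruction) loss

lambda_cyc = 10.0        # paper default; raise it to weight cyclic_loss more

def generator_loss(real_A, real_B, G_AB, G_BA, D_A, D_B):
    fake_B = G_AB(real_A)  # translate A -> B
    fake_A = G_BA(real_B)  # translate B -> A
    # adversarial terms: generators try to make the discriminators output 1
    pred_B, pred_A = D_B(fake_B), D_A(fake_A)
    adv = (adv_crit(pred_B, torch.ones_like(pred_B)) +
           adv_crit(pred_A, torch.ones_like(pred_A)))
    # cycle terms: A -> B -> A and B -> A -> B should reconstruct the inputs
    cyc = cyc_crit(G_BA(fake_B), real_A) + cyc_crit(G_AB(fake_A), real_B)
    return adv + lambda_cyc * cyc
```

Raising lambda_cyc biases training toward faithful reconstruction at the expense of the adversarial (style) term, so the sketches may end up closer to the input photo.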

I continued training. I guess it isn’t stuck in a local minimum, because the loss keeps decreasing after the fluctuation. There still seems to be room for improvement, so I’ll continue training.

What I have done lately is experiment. I use lr_find to find a good learning rate:
```python
learn.lr_find()        # LR range test: train briefly with increasing rates, record the loss
learn.recorder.plot()  # plot loss vs. learning rate to pick a value
```
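(The heuristic from the course is to pick a rate on the steep downward slope of that plot, roughly an order of magnitude before the loss bottoms out.)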

  1. find a good learning rate
  2. apply that learning rate with fit_one_cycle
  3. unfreeze the model
  4. use lr_find again
  5. train some more with fit and then see how it goes
  6. clean the dataset / increase the dataset and train again using the steps above
  7. compare the results with the previous trainings (steps 1–5 are sketched in code after this list)
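For reference, here is a minimal sketch of steps 1–5, assuming a fastai v1 `Learner` is already built for the CycleGAN task; every epoch count and learning rate below is a placeholder to be read off your own lr_find plots, not a tuned value:

```python
# steps 1-2: find a learning rate, then train the frozen model with one-cycle
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(10, max_lr=1e-4)  # placeholder lr read off the plot

# steps 3-4: unfreeze every layer and run the range test again
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()

# step 5: train some more with plain fit at a lower rate
learn.fit(10, lr=1e-5)  # placeholder lr read off the second plot

# steps 6-7: clean/grow the dataset, rerun the steps above, compare results
```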