Finally, I can conclude that sometimes the discriminator is not able to pick up on our intention of separating generated images from real images. This is a very common problem with GANs (i.e., mode collapse). I came to this conclusion because there are many things in the generated images that are not in the real images, and one of them is our desired attribute. When I made the discriminator more complex, this problem was sorted out. We run into it because we train the discriminator after training the generator (for every batch).

Another interesting observation: when I made the generator more complex (with U-Net and so on) and trained the model for more epochs, the generator started producing random coloured dots (yellow in my case), which made it easy for the discriminator to distinguish fake from real images (since the fake/generated images all have these coloured dots). In that model I got a very, very low discriminator loss, even though the generator and identity losses were high. Lesson learned: the proper complexity of the generator and the discriminator also matters in GANs.
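To make the per-batch update order concrete, here is a minimal sketch (plain PyTorch, not my exact notebook code) of a training step where the generator is updated first and the discriminator right after. The tiny stand-in networks and the `train_batch` helper are purely illustrative.

```python
# Minimal sketch of the per-batch update order described above:
# generator step first, discriminator step right after (once per batch).
import torch
import torch.nn as nn

# Tiny stand-in networks; the real models are far deeper.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_batch(real):
    # 1) Generator step: try to make D label the generated images as real.
    fake = G(real)
    pred = D(fake)
    loss_G = bce(pred, torch.ones_like(pred))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # 2) Discriminator step (after the generator): real -> 1, fake -> 0.
    pred_real = D(real)
    pred_fake = D(fake.detach())
    loss_D = (bce(pred_real, torch.ones_like(pred_real))
              + bce(pred_fake, torch.zeros_like(pred_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()
    return loss_G.item(), loss_D.item()

# Usage: train_batch(torch.randn(4, 3, 64, 64))
```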
By complexity, I mean a deeper network.
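As a rough illustration of what "deeper" means here, below is a sketch of a PatchGAN-style discriminator whose depth is controlled by a single `n_layers` argument. The layer counts and channel sizes are typical defaults, not the exact ones I used.

```python
# Illustrative only: a PatchGAN-style discriminator whose depth ("complexity")
# is controlled by n_layers. Channel sizes are common defaults, not my exact ones.
import torch.nn as nn

def make_discriminator(in_ch=3, base=64, n_layers=3):
    layers = [nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
              nn.LeakyReLU(0.2, inplace=True)]
    ch = base
    for _ in range(n_layers - 1):  # deeper network => more of these blocks
        layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                   nn.InstanceNorm2d(ch * 2),
                   nn.LeakyReLU(0.2, inplace=True)]
        ch *= 2
    layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # patch-level real/fake scores
    return nn.Sequential(*layers)

shallow_D = make_discriminator(n_layers=2)  # "simpler" discriminator
deeper_D = make_discriminator(n_layers=5)   # "more complex" discriminator
```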
I have also implemented a U-Net generator with CycleGAN, and the results were pretty good.
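For anyone curious what the U-Net part looks like, here is a toy sketch of a U-Net-style generator with skip connections; it is much smaller than what I actually plugged into CycleGAN.

```python
# Toy U-Net generator: encoder-decoder with skip connections.
# Far smaller than the real thing; for illustration only.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU())
        # Skip connection: the last block sees both the upsampled features and down1's output.
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(d1)
        u1 = self.up1(d2)
        return self.up2(torch.cat([u1, d1], dim=1))  # concatenate the skip features

# Usage: TinyUNet()(torch.randn(1, 3, 64, 64)).shape -> torch.Size([1, 3, 64, 64])
```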
Ping me for further details🙃
Thanks fastai