A few weeks ago a new paper called E-GAN came out, with an accompanying video. I've read the paper, and from what I can tell it seems far better than existing GAN techniques/architectures. The LSUN bedrooms results look stunning, and the paper promises that, with the help of evolutionary algorithms, the issues inherent to GANs (mode collapse, vanishing gradients, hyperparameter tuning, and the difficulty of measuring the network's effectiveness during training) belong to the past.
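For anyone who hasn't read it, my rough mental model of the evolutionary part is "population of generators, mutate, keep the fittest". Here's a tiny toy sketch of that loop in plain Python, just to illustrate the mutation/selection idea; this is my own simplification, not the authors' code (in the actual paper, mutation comes from training each child with a different adversarial loss, and fitness comes from the discriminator, whereas here each "generator" is just one number and the fitness function is a stub):

```python
import random

# Toy stand-in for the discriminator's fitness signal: higher is better.
# (In E-GAN proper, fitness combines a quality term and a diversity term.)
def fitness(theta, target=3.0):
    return -abs(theta - target)

def evolve(parents, n_children=3, sigma=0.5):
    rng = random.Random()
    # Mutation: each parent generator spawns perturbed children.
    # (E-GAN mutates by one training step under different GAN objectives.)
    children = [p + rng.gauss(0, sigma)
                for p in parents for _ in range(n_children)]
    # Selection: keep only the fittest candidates as the next generation.
    return sorted(children, key=fitness, reverse=True)[:len(parents)]

population = [0.0, 1.0]
for _ in range(30):
    population = evolve(population)
```

After a few dozen generations the surviving "generators" cluster near the optimum of the fitness function, which is the whole point: bad mutations simply die off instead of derailing training.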
I personally experienced a few of these issues myself when I made my own implementation of a super-resolution paper ("run and play" code here, blog post here and demo here). I had to tweak a lot of things, restart a week of training from scratch, find tricks to keep the Nash equilibrium stable, etc.
What do you guys think of this paper? By the way, did you have time to test this new architecture yourself @jeremy? If not, based on the paper and your intuition, do you think they're making solid claims and it's worth the time diving into it to make an implementation? I'd really like to plug it into my SRPGAN project to see what results I can get from it, but last time I played with GANs it took me a month and a half… literally lol.