E-GAN paper

A few weeks ago a new paper called E-GAN came out, with an accompanying video. I've read the paper and, from what I can tell, it seems far better than existing GAN techniques/architectures. The LSUN bedrooms results look stunning, and the paper promises that, with the help of evolutionary algorithms, the issues inherent to GANs (mode collapse, vanishing gradients, hyperparameter tuning, and the difficulty of measuring how well the network is doing during training) belong to the past.
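
For anyone curious about how the evolutionary part fits in, here is a rough toy sketch of how I understand the training loop from the paper: a small population of generators is mutated into offspring by training copies with different adversarial objectives, each offspring gets a fitness score, and only the fittest survive to the next step. The toy 2D data, the tiny MLPs and the quality-only fitness below are my own simplifications (the paper also adds a diversity term to the fitness), so treat this as a sketch of the idea rather than the authors' implementation.

```python
import copy
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 256

def make_generator():
    return nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
population = [make_generator()]   # the "parents"; the paper keeps mu of them
bce = nn.BCEWithLogitsLoss()

def real_batch():
    # stand-in for a real dataset: a shifted 2D Gaussian
    return torch.randn(batch, data_dim) * 0.5 + 2.0

def fake_batch(G):
    return G(torch.randn(batch, latent_dim))

# The three "mutations" from the paper are just different generator losses:
# minimax, heuristic (non-saturating) and least-squares.
def minimax_loss(d_fake):
    return -bce(d_fake, torch.zeros_like(d_fake))   # minimise log(1 - D(G(z)))

def heuristic_loss(d_fake):
    return bce(d_fake, torch.ones_like(d_fake))     # minimise -log D(G(z))

def lsgan_loss(d_fake):
    return ((d_fake - 1) ** 2).mean()               # least-squares, on the logits here

mutations = [minimax_loss, heuristic_loss, lsgan_loss]

for step in range(200):
    # 1) the usual discriminator update against the current parent
    x = real_batch()
    x_fake = fake_batch(population[0]).detach()
    d_loss = bce(D(x), torch.ones(batch, 1)) + bce(D(x_fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) variation: each parent spawns one offspring per mutation
    offspring = []
    for parent in population:
        for mutate in mutations:
            child = copy.deepcopy(parent)
            g_opt = torch.optim.Adam(child.parameters(), lr=1e-3)
            g_loss = mutate(D(fake_batch(child)))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
            offspring.append(child)

    # 3) evaluation + selection: fitness here is only the quality term
    #    E[D(G(z))]; the paper also adds a diversity term based on D's gradients.
    with torch.no_grad():
        fitness = [D(fake_batch(c)).mean().item() for c in offspring]
    best = max(range(len(offspring)), key=lambda i: fitness[i])
    population = [offspring[best]]
```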

I personally experienced a few of these issues when I made my own implementation of a super-resolution paper (“run and play” code here, blog post here and demo here). I had to tweak a lot of things, restart a week of training from scratch, find tricks to keep the Nash equilibrium stable, etc.

What do you guys think of this paper? By the way, have you had time to test this new architecture yourself @jeremy? If not, based on the paper and your intuition, do you think they are making solid claims and that it's worth diving into an implementation? I'd really like to plug it into my SRPGAN project to see what results I can get, but the last time I played with GANs it took me a month and a half… literally lol.
Thanks.


I haven’t looked at it - but if you think the results look good, then you should definitely give it a go and let us know what you find! In general, I’m a fan of evolutionary algorithms, and am glad they’re making a comeback.


Hi Ekami, great work. I wanted to try your demo but it's not working because of a server error. Could you please figure out why?

Hi Iram, indeed, the demo sometimes fails when trying to contact Algorithmia. Could you try a different image, or retry 2 or 3 times?
It should work at some point.
Sorry about that.

I was trying it on grayscale low-resolution images and it failed every time. It works pretty well on RGB. Is it possible to make it work for single-channel (grayscale) images?

Oh ok, good to know. Indeed, I think I hardcoded the number of channels to 3 (RGB) in the code. You'll find the code here. I don't think I'll have time to modify it myself. If you want to go the easy way, I believe you could just convert your greyscale image to RGB and feed it to the online demo :slight_smile:
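
Something like this with Pillow should do the conversion (untested sketch, and the file names are just examples):

```python
from PIL import Image

# Open the greyscale image and duplicate its single channel into 3 RGB channels
img = Image.open("low_res_grey.png").convert("RGB")
img.save("low_res_rgb.png")  # upload this file to the demo instead
```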

Thanks Ekami, will surely try :slight_smile: