This was just published a few hours ago, and looks very exciting!
The main idea is that they train a GAN at a low resolution first, then progressively add layers that handle higher and higher resolutions. They show extremely convincing results at 1024×1024.
They mention that they use the improved WGAN technique, but that this is orthogonal to their technique (as evidence, they also demonstrate good results without WGAN).
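For context, the "improved WGAN" technique is the gradient penalty from WGAN-GP. Here is a minimal sketch of that penalty in PyTorch (not the paper's code; the critic, tensor shapes, and `lam=10.0` default are my assumptions, though 10 is the value commonly used):

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # WGAN-GP: interpolate between real and fake samples,
    # then penalize the critic's gradient norm deviating from 1.
    eps = torch.rand(real.size(0), 1, 1, 1)  # assumes NCHW image batches
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(x_hat)
    grads = torch.autograd.grad(out.sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```

This term is added to the critic's loss; since it is independent of the growing schedule, you can drop it and train with a vanilla GAN loss, which is what the authors mean by "orthogonal".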
They mention that by starting at low resolution and slowly ramping up, they are able to train faster (around 5x). This is pretty intuitive: it doesn’t make sense to start adding details before the core structure of the content is sound.
The paper is a pretty easy read, not math heavy. Highly recommend checking it out.
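The growing step described above can be sketched in a few lines of PyTorch. When a new, higher-resolution block is added, its output is smoothly faded in against the upsampled output of the previous stage via a blend weight `alpha` that ramps from 0 to 1 during training (a sketch of the idea, not the authors' implementation; the function name and nearest-neighbor upsampling are my choices):

```python
import torch
import torch.nn.functional as F

def fade_in(old_rgb, new_rgb, alpha):
    """Blend the upsampled output of the previous (lower-res) stage
    with the new stage's output while alpha ramps 0 -> 1."""
    old_up = F.interpolate(old_rgb, scale_factor=2, mode='nearest')
    return (1 - alpha) * old_up + alpha * new_rgb
```

At `alpha=0` the network behaves exactly like the smaller network it grew from, which is why adding a block does not destabilize training.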
I am trying to implement Progressive Growing of GANs using PyTorch. The toy dataset I am using is MNIST instead of CelebA, which was used in the original paper.
While training, when I try to calculate the gradient using the .backward() method on the discriminator loss, it stops and leads to a segmentation fault. Click here for the GitHub repo of the code.
- 8 GB RAM
- running on CPU
- Python 3.6
The network is quite small, so I doubt this is a memory issue.
Hi, I reproduced PGGAN using the CelebA dataset, and I am now going to use CelebA-HQ as well. It is a PyTorch implementation, so the code may be helpful to see: https://github.com/nashory/pggan-pytorch
@nashory great work! Looking forward to seeing the progress