DCGAN paper results

(Yash Katariya) #1

The authors of the DCGAN paper say that they scaled the images to the range [-1, 1] and used tanh as the activation function in the last layer of the generator. They also use 0.2 as the alpha (slope) of LeakyReLU.

So I did these things and got rid of the Dense layers, since the authors recommend eliminating the fully connected (Dense) layers.

Then I trained the model for 50 epochs. Each epoch takes around 35 seconds on a Tesla K80. Look at the results that I got. Code for the DCGAN: https://github.com/yashk2810/DCGAN-Keras
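For anyone following along, the preprocessing step can be sketched like this in plain NumPy (the helper names are mine, not from the repo):

```python
import numpy as np

def scale_to_tanh_range(images):
    """Map uint8 pixels in [0, 255] to [-1, 1], matching tanh's output range."""
    return images.astype(np.float32) / 127.5 - 1.0

def unscale_for_display(generated):
    """Map generator tanh outputs in [-1, 1] back to [0, 255] for viewing."""
    return ((generated + 1.0) * 127.5).astype(np.uint8)

x = np.array([0, 127, 255], dtype=np.uint8)
scaled = scale_to_tanh_range(x)  # endpoints map to -1.0 and 1.0
```

The inverse mapping is what you'd apply to the generator's samples before saving them as images.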

(Hare Senpai) #2

It’s beautiful =) Nothing quite like seeing your baby come alive!

(Yash Katariya) #3

Ohh yes, it is indeed beautiful. I got these results after a lot of tries.

(jerry liu) #4

@yashkatariya I’ve often found many papers use tanh for the activation, but my own experiments generally work well with relu (and there are fewer bugs due to scaling/clipping).

How did your results compare with relu instead of tanh?

(Yash Katariya) #5

I haven’t tried relu because my implementation followed the paper directly. I’ll try relu and let you know :slight_smile:

(Yash Katariya) #6

@twairball I scaled the images to [0, 1] and applied relu activation on the last layer of the generator instead of tanh. These are the results I got after training for 50 epochs.
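One difference between the two setups, sketched in NumPy (the pre-activation values here are hypothetical, just for illustration): relu isn't bounded above, so outputs can escape the [0, 1] pixel range and may need clipping before display, whereas tanh is bounded by construction.

```python
import numpy as np

def relu(x):
    """Plain relu: zero for negative inputs, identity otherwise."""
    return np.maximum(x, 0.0)

# Hypothetical pre-activation outputs from the generator's last layer
pre = np.array([-0.5, 0.3, 1.7])

out_relu = relu(pre)                        # can exceed 1.0
out_clipped = np.clip(out_relu, 0.0, 1.0)   # clip before saving as [0, 1] pixels
out_tanh = np.tanh(pre)                     # always within (-1, 1), no clipping needed
```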

(jerry liu) #7

From the Radford, Metz, Chintala paper:

The ReLU activation (Nair & Hinton, 2010) is used in the generator with the exception of the output layer which uses the Tanh function. We observed that using a bounded activation allowed the model to learn more quickly to saturate and cover the color space of the training distribution. Within the discriminator we found the leaky rectified activation (Maas et al., 2013) (Xu et al., 2015) to work well, especially for higher resolution modeling. This is in contrast to the original GAN paper, which used the maxout activation (Goodfellow et al., 2013).

So I guess that’s the answer!

(Yash Katariya) #8

@twairball I tried relu in the generator for all layers except the last, but LeakyReLU with alpha (slope) 0.2 works better in the generator.
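For reference, the LeakyReLU the thread keeps mentioning is just relu with a small slope on the negative side; a minimal NumPy version, using the 0.2 alpha from the paper:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """LeakyReLU: identity for x > 0, slope `alpha` for x <= 0 (DCGAN uses alpha=0.2)."""
    return np.where(x > 0, x, alpha * x)

leaky_relu(np.array([-1.0, 0.0, 2.0]))  # -> [-0.2, 0.0, 2.0]
```

The nonzero negative slope keeps gradients flowing through units that plain relu would zero out, which is the usual motivation for using it in GAN discriminators.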