1D GAN convergence

Hello! I am trying to build a 1D GAN to simulate some data (for the purposes of this post, let’s say it’s a Gaussian). What I need to simulate are the y-values of this curve (the x-axis is always the same). I tried the code from the WGAN lecture (modified for 1D data), and I also tried something much simpler with just a fully connected NN, without batch norm or convolutions (a sketch of what I mean is at the end of this post), but neither seems to work. I noticed that if I use this line

for p in netD.parameters(): p.data.clamp_(-0.01, 0.01)  # WGAN weight clipping
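
As far as I understand, in the WGAN setup that clamp is meant to run inside the critic update, roughly like this (n_critic, optD, and the critic loss here are a sketch of my understanding, not the lecture’s exact code):

for _ in range(n_critic):
    for p in netD.parameters():
        p.data.clamp_(-0.01, 0.01)  # enforce the Lipschitz constraint
    optD.zero_grad()
    fake = netG(create_noise(real.size(0))).detach()  # do not backprop into G here
    lossD = netD(fake).mean() - netD(real).mean()     # WGAN critic loss: no log, no sigmoid
    lossD.backward()
    optD.step()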

With that clipping in place, the generator ends up producing the same output every time and the loss of D barely changes. If I remove the clamp, the losses of both G and D go close to zero, but the output of G is still just noise (I don’t see how the loss of G can be near zero while its output is pure noise). My loss functions are defined like this:

real_loss = netD(real)                  # D's output on a batch of real curves
fake = netG(create_noise(real.size(0)))
fake_loss = netD(V(fake.data))          # V = torch.autograd.Variable; wrapping fake.data detaches it from G
lossD = (-torch.log(real_loss) - torch.log(1 - fake_loss)).mean(0)
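
As far as I can tell, this should be equivalent to binary cross-entropy with target 1 for real and 0 for fake, i.e. something like this (a sketch, assuming netD ends in a sigmoid):

criterion = torch.nn.BCELoss()
ones = torch.ones(real.size(0), 1)    # targets for real samples
zeros = torch.zeros(real.size(0), 1)  # targets for fake samples
lossD = criterion(real_loss, ones) + criterion(fake_loss, zeros)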

and for the generator:

lossG = (torch.log(1-netD(netG(create_noise(bs))))).mean(0)
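
I have also seen the non-saturating form -log(D(G(z))) recommended instead of log(1 - D(G(z))); I believe that would just be:

lossG = (-torch.log(netD(netG(create_noise(bs))))).mean(0)

My understanding is that this variant gives G stronger gradients early in training, when D rejects fakes easily.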

Can someone give me some advice on what to do? Thank you!
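
For reference, here is roughly what the simple fully connected version I mentioned above looks like (the layer sizes, nz, n_points, and create_noise are illustrative, not my exact code):

import torch
import torch.nn as nn

nz = 16          # latent dimension
n_points = 100   # number of y-values per curve

# generator: latent vector -> y-values of one curve
netG = nn.Sequential(
    nn.Linear(nz, 64),
    nn.ReLU(),
    nn.Linear(64, n_points),
)

# discriminator: y-values -> probability the curve is real
netD = nn.Sequential(
    nn.Linear(n_points, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
)

def create_noise(b):
    # one latent vector per sample in the batch
    return torch.randn(b, nz)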