Loss functions for WGAN

The way fastai does it: WassersteinLoss = critic(real) - critic(fake) is the critic's loss, and NoopLoss = critic(fake) is the generator's loss.
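For concreteness, here is a minimal sketch of the two losses exactly as described above. This is plain Python on scalar critic scores, purely to illustrate the sign convention being asked about; actual fastai losses operate on batched tensors, and the function names here just mirror the ones mentioned in the post:

```python
def wasserstein_loss(real_pred, fake_pred):
    # Critic loss as described above: critic(real) - critic(fake).
    # Minimizing this pushes critic(real) down and critic(fake) up.
    return real_pred - fake_pred

def noop_loss(fake_pred):
    # Generator loss as described above: just critic(fake).
    # Minimizing it trains the generator toward low critic scores on fakes.
    return fake_pred

# Example: if the critic scores a real at -2.0 and a fake at 3.0,
# the critic loss is -2.0 - 3.0 = -5.0
print(wasserstein_loss(-2.0, 3.0))  # -5.0
print(noop_loss(3.0))               # 3.0
```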

But if these loss functions are minimized, don't they encourage the critic to output low scores for real images and high scores for fake ones? The original WGAN paper instead proposes maximizing critic(real) - critic(fake), hence the + sign in its gradient update.

I'm confused by this. Can anyone please explain where I'm going wrong?


I'd also like to understand the logic behind this.