In this paper there is a plot of how the GAN losses evolve over the training epochs.

Figure 2: (loss plot from the paper, image not reproduced here)

These are of course averaged losses.

How can both the discriminator loss and generator loss decrease?

The paper uses GANs for super-resolution, so it has an extra L1 loss. I don't think that would change the relationship between the adversarial losses.
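For concreteness, super-resolution GANs usually train the generator on a weighted sum of the pixel-wise L1 term and the adversarial term. A minimal sketch in plain Python, where the weight `lambda_adv` and all names are my assumptions rather than the paper's:

```python
def l1_loss(sr_pixels, hr_pixels):
    # Mean absolute difference between super-resolved and ground-truth pixels.
    return sum(abs(s - h) for s, h in zip(sr_pixels, hr_pixels)) / len(sr_pixels)

def generator_total_loss(sr_pixels, hr_pixels, fake_y, lambda_adv=1e-3):
    # Adversarial term follows the convention below: the generator minimizes fake_y.
    return l1_loss(sr_pixels, hr_pixels) + lambda_adv * fake_y
```

Since the L1 term only depends on the generator's output versus the ground truth, it shouldn't change how the two adversarial terms pull against each other.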

```
# Assuming the discriminator outputs the probability that its input is fake:
real_y = discriminator(real_sample)
fake_y = discriminator(generator(noise))
discriminator_loss = real_y - fake_y + 1  # +1 keeps it in [0, 2], which looks nicer

fake_y = discriminator(generator(noise))  # recomputed with fresh noise for the generator step
generator_loss = fake_y
```

I would expect one of the losses to increase as the other one decreases. Both losses use the same calculation of `fake_y`, but one decreases `-fake_y` and the other `fake_y`: one optimizer is making `fake_y` smaller and the other optimizer is making it larger.
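To make that tension concrete, here is a small finite-difference check on the loss definitions above, in plain Python with scalar stand-ins for the discriminator outputs (the specific values 0.9 and 0.4 are arbitrary):

```python
def discriminator_loss(real_y, fake_y):
    # Assuming D outputs P(input is fake): D wants real_y low and fake_y high.
    return real_y - fake_y + 1  # +1 keeps it in [0, 2] for outputs in [0, 1]

def generator_loss(fake_y):
    # G wants its fakes to look real, i.e. a low fake_y under this convention.
    return fake_y

# Nudge fake_y upward and watch the two losses move in opposite directions.
eps = 1e-3
d_slope = (discriminator_loss(0.9, 0.4 + eps) - discriminator_loss(0.9, 0.4)) / eps
g_slope = (generator_loss(0.4 + eps) - generator_loss(0.4)) / eps
print(d_slope, g_slope)  # slopes of opposite sign: about -1 and +1
```

The slopes with respect to `fake_y` are equal and opposite, which is exactly why I'd expect one averaged loss to rise when the other falls.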

Maybe the loss functions aren't calculated like I said.

In the widely used analogy:

In simple terms the generator is like a forger trying to produce some counterfeit material, and the discriminator is like the police trying to detect the forged items.

We can measure how good the police are by how many fakes out of 100 they catch, and how good the forger is by how many of his 100 fakes deceive the police.

Wouldn't that mean that if the police get better, the forger gets worse (using the aforementioned measure of how good they are)?
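Under that measure the two scores are two sides of the same 100 trials, which is what makes it look zero-sum to me. A toy Python check (the outcomes list is made up):

```python
# Each entry is one counterfeit item: True if the police flag it, False if it passes.
outcomes = [True, False, True, True, False, True, False, True, True, True]

police_score = sum(outcomes)                 # fakes the police caught
forger_score = sum(not o for o in outcomes)  # fakes that slipped through

# The scores always sum to the number of fakes: one side's gain is the other's loss.
print(police_score, forger_score, police_score + forger_score)
```

With this scoring there is no way for both scores to improve at once, since they always add up to the total number of fakes.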

Therefore we wouldn't be able to see both of them being good at the same time! But the graph from the paper indicates otherwise. I must be missing something!