Sharing the discriminator's prediction in GAN loss

I know that some GAN training procedures reuse the same fake images when calculating the losses for D and G (calling fake.detach() within D's loss calculation so that no gradients flow back into G during the discriminator's update).
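For reference, the usual pattern I mean looks roughly like this (just a sketch; G, D, opt_D, opt_G, GanLoss, and z are placeholders for my actual generator, discriminator, their optimizers, the loss helper, and a latent batch):

fakes = G(z)  # generate a batch of fake images
# D update: detach the fakes so no gradients flow back into G
loss_D = GanLoss(D(fakes.detach()), is_fake=True)
opt_D.zero_grad()
loss_D.backward()
opt_D.step()
# G update: a SECOND forward pass through D, this time without detaching
loss_G = GanLoss(D(fakes), is_fake=False)
opt_G.zero_grad()
loss_G.backward()  # gradients flow through D into G, but only G is stepped
opt_G.step()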

I was wondering: are there any consequences to also sharing the discriminator's prediction on those fake images, i.e. reusing the same output tensor in the generator's loss term?

Some pseudo-code to illustrate (same placeholder names as above):
fake_pred = D(fakes)  # forward pass only once through D
# GAN loss for the discriminator (real-image term elided)
loss_D = GanLoss(fake_pred, is_fake=True)
loss_D.backward(retain_graph=True)  # keep the graph for the generator's backward below
opt_D.step()  # step the discriminator's optimizer (a loss tensor has no .step())
[…]
# GAN loss for the generator, reusing the same fake_pred
loss_G = GanLoss(fake_pred, is_fake=False)
loss_G.backward()  # note: fakes was never detached, so both backwards reach G
opt_G.step()
[…]  # zero out gradients before the next pass

So instead of running the discriminator's forward pass on the fake images once (on the detached fakes) for the discriminator's update and then AGAIN (without detaching) for the generator's update, could you just reuse the single prediction when optimizing the generator?

The reason I imagine you would want to do this is speed: one fewer forward pass through the discriminator per training step.