I was going through the GAN implementation code to compare v2 with v1 and see if there are any differences. I noticed that v1 zeroes the gradients after each batch (gan.py, line 127) in GANTrainer:
def on_batch_end(self, **kwargs):
    self.opt_critic.zero_grad()
    self.opt_gen.zero_grad()
while I can't see gradients being zeroed anywhere in v2. What is the reason for that, or am I overlooking something?
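For context, here is a minimal sketch of the pattern I expected to find: a plain PyTorch GAN loop where both optimizers are explicitly zeroed after every batch, mirroring v1's on_batch_end. The tiny Linear models and the loop structure below are placeholders for illustration, not the actual fastai classes:

```python
import torch
import torch.nn as nn

# Stand-ins for the generator and critic (not the real fastai models).
gen = nn.Linear(2, 2)
critic = nn.Linear(2, 1)
opt_gen = torch.optim.Adam(gen.parameters())
opt_critic = torch.optim.Adam(critic.parameters())

for _ in range(3):  # a few fake batches
    x = torch.randn(4, 2)

    # Critic step: detach the generator output so only critic grads accumulate.
    loss_critic = critic(gen(x).detach()).mean()
    loss_critic.backward()
    opt_critic.step()

    # Generator step: backprop through the critic into the generator.
    loss_gen = -critic(gen(x)).mean()
    loss_gen.backward()
    opt_gen.step()

    # Mirror v1's on_batch_end: clear both optimizers' gradients after the batch.
    opt_critic.zero_grad()
    opt_gen.zero_grad()

# After the loop, no stale gradients remain on either model.
assert all(p.grad is None or bool(torch.all(p.grad == 0)) for p in gen.parameters())
assert all(p.grad is None or bool(torch.all(p.grad == 0)) for p in critic.parameters())
```

Without those zero_grad calls (or an equivalent somewhere in the v2 training loop), gradients from the critic step would leak into the generator step and accumulate across batches.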