Distributed learning GAN

Hello, I’ve had no issues using distributed learning with a simple CNN implementation, but for some reason I am unable to use more than one GPU with a GANLearner.

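For reference, this is roughly the CNN setup where to_distributed works fine for me. The dataset, model, and hyperparameters below are just placeholders rather than my actual script:

    # Minimal sketch of the CNN script that runs on multiple GPUs without problems.
    # Launched with: python -m torch.distributed.launch --nproc_per_node=2 train_cnn.py
    import argparse
    import torch
    from fastai.vision import *
    from fastai.distributed import *  # provides Learner.to_distributed

    parser = argparse.ArgumentParser()
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()
    torch.cuda.set_device(args.local_rank)
    torch.distributed.init_process_group(backend='nccl', init_method='env://')

    path = untar_data(URLs.MNIST_SAMPLE)            # placeholder dataset
    data = ImageDataBunch.from_folder(path, bs=64)
    learn = cnn_learner(data, models.resnet18, metrics=accuracy)
    learn = learn.to_distributed(args.local_rank)
    learn.fit_one_cycle(5, 2e-4)
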
Here is an example of what I’m doing with the GANLearner:

    learn = GANLearner.wgan(data, generator, critic, switch_eval=False,
                            opt_func=partial(optim.Adam, betas=(0., 0.99)), wd=0.)
    learn = learn.to_distributed(local_rank)
    learn.fit(epochs, 2e-4)

This is the error at the end of the stack trace:

    AttributeError: 'GANLearner' object has no attribute 'to_distributed'