I started to implement a simple GAN project, but I am getting the following error:

`no implementation found for 'torch.Tensor.__rsub__' on types that implement __torch_function__: [<class 'fastai.vision.gan.InvisibleTensor'>, <class 'fastai.torch_core.TensorImage'>]`
I am uncertain whether this is related to the issue on GitHub.
I have already tested different fastai versions, including the current one from GitHub, and I ran the attached code both on my local machine and on Google Colab. The attached file includes the full stack trace as well as implementation details. I also tried updating PyTorch via `!pip install -U torch torchvision`.

gan_test.pdf (88.9 KB)
I would appreciate your help (also because it will be a Secret Santa project).
fastai's `BaseLoss` class handles this for you. The manual cast is only needed if you don't want to use `BaseLoss` and instead write the loss in raw PyTorch, in those cases where `x` and `y` aren't the same tensor type.
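For intuition: the error arises because PyTorch's `__torch_function__` dispatch refuses to pick a winner when two unrelated `Tensor` subclasses meet in one operation, so casting both operands to a common base type (fastai's `TensorBase`) resolves it. Here is a rough pure-Python analogy of that dispatch rule (no torch or fastai needed; the class names and `value` attribute are made up for illustration):

```python
class Base:
    """Stands in for a common ancestor type like fastai's TensorBase."""
    def __init__(self, value):
        self.value = value

    def __sub__(self, other):
        # Mimic __torch_function__ dispatch: refuse to operate when the
        # other operand is an unrelated sibling subclass.
        if (type(other) not in (type(self), Base)
                and type(self) not in (type(other), Base)):
            return NotImplemented
        return Base(self.value - other.value)

class InvisibleTensor(Base):   # plays the role of fastai.vision.gan.InvisibleTensor
    pass

class TensorImage(Base):       # plays the role of fastai.torch_core.TensorImage
    pass

a, b = InvisibleTensor(5), TensorImage(3)

try:
    a - b                      # sibling subclasses: no implementation found
except TypeError as e:
    print("dispatch failed:", e)

result = Base(a.value) - Base(b.value)   # cast both to the common base
print(result.value)                      # 2
```

This is only an analogy for why the cast works, not how PyTorch implements dispatch internally.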
Should I avoid downgrading and instead try casting to `TensorBase`? And how would I go about that? Simply casting to `TensorBase` instead of to `InvisibleTensor` seems to fail as well.
It keeps saying my NVIDIA driver is too old for this version of PyTorch, even though the driver and torch versions appear to match according to the PyTorch website.
Just in case anyone else is as slow as me and finds this thread later: the solutions in this thread initially didn't appear to work for me because I wasn't able to fully grasp them at first. What did work was applying @muellerzr's `contiguous` re-casting idea to the `.forward()` of my critic model (which was, in retrospect, what he was recommending for general use cases). As in the following:
```python
import torch.nn as nn
from fastai.torch_core import TensorBase

class MyCritic(nn.Module):
    def __init__(self, im_chan=3, hidden_dim=64):
        super().__init__()
        self.crit = nn.Sequential(
            # ...stuff...
        )

    def forward(self, x):
        # The cast to TensorBase is the key step: it gives both operands
        # a common type so fastai's tensor-subclass dispatch succeeds
        x = TensorBase(x.transpose(-1, -1).contiguous())
        return self.crit(x)
```
```python
my_critic = MyCritic(im_chan=3, hidden_dim=64)
```

(my_generator was already working fine)

```python
learn = GANLearner.wgan(dls, my_generator, my_critic,
                        crit_loss_func=calc_loss,
                        opt_func=partial(Adam, mom=0.))
```