Hi All,
I'm trying to create a custom loss function that uses a number of numpy/scipy routines. I don't want to rewrite all of those routines in pure torch, because the lack of complex-number support in torch makes that difficult.
To keep this as simple as possible, here's a torch example that I'd like to reimplement with numpy/scipy. (My actual math is much more involved; this is just to illustrate the torch -> numpy -> torch round trip in the loss function.)
import torch

class TorchLoss(torch.nn.Module):
    def __init__(self):
        super(TorchLoss, self).__init__()

    def forward(self, x, y):
        # Plain MSE, computed entirely in torch so autograd can track it
        fr_mse = torch.mean((x - y) ** 2)
        return fr_mse
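For context, the simplified training loop this runs in looks roughly like the following (reconstructed from my traceback below; model, x, y, and learning_rate are defined elsewhere, n_steps is just a placeholder, and my plot_item plotting helper is omitted):

loss_fn = TorchLoss()
for t in range(n_steps):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    model.zero_grad()
    loss.backward(create_graph=True)
    # Manual SGD step on the parameters
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad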
This works just fine. However, when I try to do some numpy/scipy math, it breaks:
import numpy as np

class NumpyLoss(torch.nn.Module):
    def __init__(self):
        super(NumpyLoss, self).__init__()

    def forward(self, x, y):
        with torch.no_grad():
            # Round-trip through numpy for the actual math
            xx = x.detach().cpu().numpy()
            yy = y.detach().cpu().numpy()
            fr_loss = np.sum((xx - yy) ** 2)
            loss = torch.tensor(fr_loss).float()
        return loss
The error given is:
RuntimeError Traceback (most recent call last)
<ipython-input-11-1d0557931cd1> in <module>
8 plot_item(y_pred, h_inv)
9 model.zero_grad()
---> 10 loss.backward(create_graph=True)
11 with torch.no_grad():
12 for param in model.parameters():
~/anaconda3/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
164 products. Defaults to ``False``.
165 """
--> 166 torch.autograd.backward(self, gradient, retain_graph, create_graph)
167
168 def register_hook(self, hook):
~/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
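As a sanity check, I believe even a tiny round trip through numpy shows the graph being severed:

a = torch.ones(3, requires_grad=True)
b = torch.tensor(a.detach().cpu().numpy()).sum()
print(b.requires_grad, b.grad_fn)  # prints: False None -- the graph is gone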
I gather that .detach() (and the no_grad block) severs the result from the autograd graph, so the loss has no grad_fn. When I instead try setting requires_grad on the result, it fails in a different way:
class NumpyLoss(torch.nn.Module):
    def __init__(self):
        super(NumpyLoss, self).__init__()

    def forward(self, x, y):
        with torch.no_grad():
            xx = x.detach().cpu().numpy()
            yy = y.detach().cpu().numpy()
            fr_loss = np.sum((xx - yy) ** 2)
            loss = torch.tensor(fr_loss).float()
            # Marks the new tensor as a leaf requiring grad, but it's still
            # disconnected from the model parameters
            loss.requires_grad = True
        return loss
This version fails with the following error when the parameter update runs:
TypeError Traceback (most recent call last)
<ipython-input-11-1d0557931cd1> in <module>
11 with torch.no_grad():
12 for param in model.parameters():
---> 13 param -= learning_rate * param.grad
TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
I've been searching through the forums and I just don't see how to do this right. I suspect the second failure happens because the new loss tensor is a fresh leaf with no connection to the model parameters, so backward() never populates param.grad, which in turn comes from my fundamental lack of understanding of autograd. Anyway, is there a simple way to make this work in numpy?
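From what I've pieced together so far, I suspect the answer involves subclassing torch.autograd.Function and writing the backward pass by hand, since autograd can't trace through numpy. Here's a minimal sketch for the toy MSE case above (NumpyMSE is just my placeholder name, and the gradient of mean((x - y)**2) is derived by hand), but I don't know whether this is the intended pattern or whether it scales to my more involved math:

class NumpyMSE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, y):
        ctx.save_for_backward(x, y)
        xx = x.detach().cpu().numpy()
        yy = y.detach().cpu().numpy()
        fr_loss = np.mean((xx - yy) ** 2)
        # Put the scalar back on x's device/dtype so it plugs into the graph
        return x.new_tensor(fr_loss)

    @staticmethod
    def backward(ctx, grad_output):
        x, y = ctx.saved_tensors
        # d/dx mean((x - y)**2) = 2 * (x - y) / N; the y gradient is its negative
        grad_x = 2.0 * (x - y) / x.numel()
        return grad_output * grad_x, -grad_output * grad_x

# Used as: loss = NumpyMSE.apply(y_pred, y); loss.backward()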
Thanks,
-Caleb