I need to get the gradients of a model after it was trained, but I'm not really sure how to do it from a learner object, since under the hood no_grad() is being called.
My input is an image tensor:
>>> image.shape
torch.Size([1, 3, 224, 224])
Now when I try to get the gradients as follows:
>>> target = learn_inf.model(image)
>>> target.backward()
>>> target.grads()
I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-132-c19c64c0208c> in <module>()
1 target = learn_inf.model(image)
----> 2 target.backward()
3 target.grads()
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
193 products. Defaults to ``False``.
194 """
--> 195 torch.autograd.backward(self, gradient, retain_graph, create_graph)
196
197 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 grad_tensors = list(grad_tensors)
92
---> 93 grad_tensors = _make_grads(tensors, grad_tensors)
94 if retain_graph is None:
95 retain_graph = create_graph
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in _make_grads(outputs, grads)
32 if out.requires_grad:
33 if out.numel() != 1:
---> 34 raise RuntimeError("grad can be implicitly created only for scalar outputs")
35 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
36 else:
RuntimeError: grad can be implicitly created only for scalar outputs
What’s the proper way to get the gradients?
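For reference, here is a minimal sketch of what I understand the usual pattern to be (using a hypothetical stand-in model, since I don't know what architecture `learn_inf.model` is). The error above comes from `target` being a vector of class scores, not a scalar, so `backward()` can't implicitly create the output gradient; also, tensors have a `.grad` attribute rather than a `.grads()` method, and it is only populated on leaf tensors that have `requires_grad` set:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for learn_inf.model: any trained nn.Module works here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 224, 224)
image.requires_grad_()  # make the input a leaf that tracks gradients

output = model(image)                # shape [1, 10]: one score per class, not a scalar
score = output[0, output.argmax()]   # reduce to a scalar, e.g. the top class score
score.backward()                     # works now that the output is a scalar

grads = image.grad                   # gradient of the score w.r.t. the input image
print(grads.shape)                   # torch.Size([1, 3, 224, 224])
```

Equivalently, you could pass an explicit `gradient` argument to `backward()` instead of reducing to a scalar. If fastai wraps inference in `torch.no_grad()`, I assume you'd need to run the forward pass outside that context for this to work — is that the right approach?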