Return gradient wrt inputs during prediction

How can I return the gradients with respect to each input during prediction, once I've trained the model in a learner with fastai v2?

Have a look here: https://github.com/fastai/fastbook/blob/master/18_CAM.ipynb

Thanks! That definitely helps as a starting point, but its interface isn't very consistent with torch.autograd.grad, so I can't see an easy way to get higher-order derivatives with respect to some of the inputs. Do you have any suggestions on how to do the equivalent of grad(grad(x)) with those hooks?

No, I haven't tried doing anything with higher-order gradients. You're best off checking the PyTorch forums; we're just using the built-in PyTorch functionality, so there's nothing fastai-specific here.
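For what it's worth, higher-order derivatives are available in plain PyTorch via torch.autograd.grad with create_graph=True. A minimal sketch with a toy function (f below is just an illustrative stand-in, not anything from the notebook):

import torch

# Toy scalar function standing in for a model output (illustrative only).
def f(x):
    return (x ** 3).sum()

x = torch.randn(4, requires_grad=True)

# First derivative; create_graph=True keeps the graph so it can be differentiated again.
(grad1,) = torch.autograd.grad(f(x), x, create_graph=True)

# Second derivative of the summed first derivative w.r.t. x (here 6 * x).
(grad2,) = torch.autograd.grad(grad1.sum(), x)
print(grad2)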

Thanks. It does indeed seem to be a PyTorch issue. The example in the notebook works fine for first-order gradients and single inputs taken one at a time, but it doesn't seem to be supported for batches without aggregating the loss, which is not what I want.

Here's how to do it for a single input, in case someone needs the solution:

# Grab a validation batch; [1] picks out the input tensor in this setup.
x = learn.dls.valid.one_batch()[1]
# Move to the GPU first, then mark as requiring grad, so x stays a leaf and x.grad gets populated.
x = x.cuda().requires_grad_(True)
# Run the model in eval mode; the first argument is unused here.
output = learn.model.eval()(None, x)
# Backprop from the first sample's output and read off the gradient w.r.t. the first input.
output[0].backward()
x.grad[0]
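For the batch case, one possible workaround in the meantime is to sum the outputs before calling backward(). This is just a sketch, and it assumes the model produces one scalar per sample and that each sample's output depends only on its own input (true in eval mode for typical models, since BatchNorm then uses running statistics); under those assumptions each row of x.grad is that sample's own input gradient, without aggregating a loss:

# Same setup as above.
x = learn.dls.valid.one_batch()[1]
x = x.cuda().requires_grad_(True)
output = learn.model.eval()(None, x)
# Summing over the batch gives d(output[i]) / d(x[i]) in each row of x.grad,
# as long as samples don't interact inside the model.
output.sum().backward()
per_sample_grads = x.grad  # same shape as x, one gradient row per input row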

And here's the related PyTorch question for reference, in case the batch case gets resolved: