Bug(?): plot_top_losses() RuntimeError

When I use plot_top_losses() I get a single image as output regardless of the value of k, followed by the RuntimeError below. I've tried restarting my kernel and running it again, but that didn't help either.
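For reference, here is roughly my setup; the dataset and architecture below are just placeholders, the relevant part is that the learner is in mixed precision via to_fp16() (which matches the "Half but found Float" error):

```python
from fastai.vision import *

# Placeholder data/model -- the key detail is the .to_fp16() call
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = cnn_learner(data, models.resnet18, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)  # -> raises the RuntimeError below
```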

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-26-776eb7830b8b> in <module>
----> 1 interp.plot_top_losses(9)

~/fastai/fastai/vision/learner.py in _cl_int_plot_top_losses(self, k, largest, figsize, heatmap, heatmap_thresh, return_fig)
    155                 with hook_output(m[0], grad= True) as hook_g:
    156                     preds = m(xb)
--> 157                     preds[0,cl].backward()
    158             acts = hook_a.stored[0].cpu()
    159             if (acts.shape[-1]*acts.shape[-2]) >= heatmap_thresh:

~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    100                 products. Defaults to ``False``.
    101         """
--> 102         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    103 
    104     def register_hook(self, hook):

~/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91 
     92 

RuntimeError: expected scalar type Half but found Float

You either need to put your learner back in full precision (learn = learn.to_fp32()) or pass heatmap=False. The function tries to compute a Grad-CAM heatmap, and that backward pass doesn't work in FP16.
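In code, the two workarounds look like this (heatmap is a keyword argument of plot_top_losses, as the traceback's signature shows):

```python
# Option 1: convert the learner back to full precision first
learn = learn.to_fp32()
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)

# Option 2: stay in FP16 but skip the Grad-CAM heatmap entirely
interp.plot_top_losses(9, heatmap=False)
```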

Ah I see, that fixed it; thanks!