Hi, I’m trying to create a heatmap like the one in Lesson 6, but with a different dataset, and I can’t get it working properly. The problem seems to be the hook on the end of the model’s convolutional part: for some reason it returns an activation map that is only 1x1 pixels. I have checked that the model loads correctly and that the minibatch is loaded just fine.
Could the small size of the input images be the cause? They are 32x32 after all, so perhaps the conv layers shrink the activation map down to 1x1. Any thoughts on how to start debugging this? Below is the function where the mistake seems to happen, and further below the full heatmap code as copied from the lesson notebook.
def hooked_backward(cat=y):
    with hook_output(m[0]) as hook_a:
        with hook_output(m[0], grad=True) as hook_g:
            # xb.shape: torch.Size([1, 3, 32, 32]), as it should be
            preds = m(xb)
            # preds.shape: torch.Size([1, 2]), as it should be
            preds[0, int(cat)].backward()
            # hook_a.stored.shape: torch.Size([1, 512, 1, 1])
            # Uh-oh! It only has 1 pixel!
    return hook_a, hook_g
Rest of the code:
from fastai.callbacks.hooks import *
def show_heatmap(hm):
    _, ax = plt.subplots()
    xb_im.show(ax)
    ax.imshow(hm, alpha=0.6, extent=(0, size[0], size[1], 0),
              interpolation='bilinear', cmap='magma');
m = learn.model.eval();

idx = 0
x, y = data.valid_ds[idx]
xb, _ = data.one_item(x)
xb_im = Image(data.denorm(xb)[0])
xb = xb.cuda()

# Here the hooking function is called
hook_a, hook_g = hooked_backward()
# -------------------------------
acts = hook_a.stored[0].cpu()
acts.shape

avg_acts = acts.mean(0)
avg_acts.shape

show_heatmap(avg_acts)