Share your work here ✅

Very interesting @r2d2. Based on your code, I played around with the hook callbacks and found that we can extract the activations of the last layer as shown below (using fastai's `hook_output`, which sgugger suggested).

    from fastai.vision import *               # fastai v1: flatten_model, hook_output, apply_tfms, DeviceDataLoader
    from torch.utils.data import TensorDataset

    last_layer = flatten_model(learn.model)[-3]
    hook = hook_output(last_layer)            # stores the layer's output on every forward pass
    learn.model.eval()
    n_valid = len(data.valid_ds.ds.y)
    acts = []
    for i in range(n_valid):
        img, label = data.valid_dl.dl.dataset[i]
        img = apply_tfms(learn.data.valid_ds.tfms, img, **learn.data.valid_ds.kwargs)
        ds = TensorDataset(img.data[None], torch.zeros(1))
        dl = DeviceDataLoader.create(ds, bs=1, shuffle=False, device=learn.data.device,
                                     tfms=learn.data.valid_dl.tfms, num_workers=0)
        pred = learn.model(dl.one_batch()[0])  # forward pass; hook.stored now holds this image's activation
        acts.append(hook.stored)
        if i % 1000 == 0:
            print(f'{i/n_valid*100:.2f}% ready')
    acts = torch.cat(acts, dim=0)             # concatenate once at the end instead of inside the loop
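The same idea can be done in batches instead of one image at a time. Here is a minimal plain-PyTorch sketch (toy model and random data as stand-ins, so it's framework-version independent): register a forward hook on the layer you care about, run the validation loader through the model once, and concatenate what the hook collected.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a trained model; in practice this would be learn.model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
last_hidden = model[1]  # layer whose activations we want

stored = []
def save_activation(module, inp, out):
    # Called on every forward pass through `last_hidden`.
    stored.append(out.detach().cpu())

handle = last_hidden.register_forward_hook(save_activation)

# Toy "validation set": 100 samples of 8 features each.
ds = TensorDataset(torch.randn(100, 8), torch.zeros(100))
dl = DataLoader(ds, batch_size=32, shuffle=False)

model.eval()
with torch.no_grad():
    for xb, _ in dl:
        model(xb)  # we only need the side effect of the hook

handle.remove()                   # detach the hook when done
acts = torch.cat(stored, dim=0)   # shape: (100, 16)
```

Batching this way avoids building a one-item `DataLoader` per image and keeps the GPU busy.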

I can't find `Image.predict` anymore; with that function the code would be more compact. As for `HookCallback`, I don't know how to use it yet :D. Since we want to save the activations on the validation set, I'm not sure whether we can add a callback after the learner has already been trained. I need to read the source code some more.
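On the question of adding something after training: plain PyTorch forward hooks can be registered on any model at any time, so no training-time callback is strictly needed. Below is a hypothetical little context manager (my own helper, loosely mimicking what `hook_output` does for cleanup) attached to an already-built model:

```python
import contextlib
import torch
import torch.nn as nn

@contextlib.contextmanager
def capture_output(layer):
    """Collect a layer's outputs for the duration of the `with` block."""
    stored = []
    handle = layer.register_forward_hook(lambda m, i, o: stored.append(o.detach()))
    try:
        yield stored
    finally:
        handle.remove()  # always detach the hook, even on error

# Pretend this model was already trained; hooks attach just as well afterwards.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

with capture_output(model[1]) as stored, torch.no_grad():
    model(torch.randn(5, 4))

acts = torch.cat(stored, dim=0)  # shape: (5, 8)
```

So the hook approach above should work fine post-training; a `HookCallback` would mainly be convenient if you wanted the same thing during `fit`.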

P.S. I guess this is useful for you too, @MicPie :smiley:
