Oh, so you want the activations at each layer up to the final one? Try this:

```
import torch.nn as nn

def return_sequential(layer_num, model):
    # Sub-model made of the first `layer_num` children of `model`
    return nn.Sequential(*list(model.children())[:layer_num])

class get_activation_layer(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model
        # One truncated copy per prefix of layers: [0..1), [0..2), ..., [0..n)
        self.layer_models = []
        for i in range(1, len(self.model) + 1):
            self.layer_models.append(return_sequential(i, self.model))

    def forward(self, x):
        self.outputs = []
        for layer_model in self.layer_models:
            self.outputs.append(layer_model(x))
        return self.outputs
```

Then:

```
tmp_model = get_activation_layer(learn.model)
layer_outputs = tmp_model(V(i))
```

Where `i` is your input image as a tensor. Then `layer_outputs` will have the activation tensors at each layer.
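If you'd rather not re-run the model once per layer, a lighter-weight sketch is to register forward hooks so every layer's activation is captured in a single forward pass. This is plain PyTorch, not fastai-specific; `collect_activations` and the toy model here are illustrative names, not part of your code:

```
import torch
import torch.nn as nn

def collect_activations(model, x):
    """Run model(x) once and return the output of every child layer."""
    outputs = []
    hooks = []
    for layer in model.children():
        # Each hook appends that layer's output during the forward pass
        hooks.append(layer.register_forward_hook(
            lambda module, inp, out: outputs.append(out)))
    try:
        model(x)
    finally:
        # Always detach the hooks so the model is left unchanged
        for h in hooks:
            h.remove()
    return outputs

# Toy example (stand-in for learn.model):
toy = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
acts = collect_activations(toy, torch.randn(1, 8))
```

Same idea, but only one pass through the network, which matters once the model is deep.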