PyTorch - Best way to get at intermediate layers in VGG and ResNet?

(WG) #1

Just getting started with transfer learning in PyTorch and was wondering …

What is the recommended way(s) to grab output at intermediate layers (not just the last layer)?

In particular, how should one pre-compute the convolutional output for VGG16 … or get the output of ResNet50 BEFORE the global average pooling layer?

(WG) #2

Here is at least one way to do it … thoughts?

res50_model = models.resnet50(pretrained=True)
res50_conv = nn.Sequential(*list(res50_model.children())[:-2])

This grabs a pretrained ResNet50 model courtesy of the torchvision package and then builds a sequential model from it that excludes the final two modules (i.e., the global average pooling layer and the fully connected layer).

for param in res50_conv.parameters():
    param.requires_grad = False

No need to backprop through the model since I’m using it purely for feature extraction.

inputs, labels = next(iter(dataloaders['train']))
inputs, labels = Variable(inputs), Variable(labels)
outputs = res50_conv(inputs)  # => torch.Size([4, 2048, 7, 7])

To test, I grab a batch of 4 examples and run them through my modified model.

And voila, I get the 2048x7x7 output I expected!

It feels both weird and cool to be able to pass images of any size into the network and have it just work. I burnt a few minutes here and there trying to get the model to tell me the output size of this layer or that layer until I realized that only works for the fully connected layers because, I believe, those are the only ones with a definitive input and output shape. The convolutional layers' input/output shapes are dynamic, based on the shape of your examples … which, like I said, feels weird coming from Theano/TF, but also very cool.

(WG) #3

Here is another way inspired from this forum post:

class ResNet50Bottom(nn.Module):
    def __init__(self, original_model):
        super(ResNet50Bottom, self).__init__()
        self.features = nn.Sequential(*list(original_model.children())[:-2])

    def forward(self, x):
        x = self.features(x)
        return x

res50_model = models.resnet50(pretrained=True)
res50_conv2 = ResNet50Bottom(res50_model)

outputs = res50_conv2(inputs)  # => torch.Size([4, 2048, 7, 7])

Also, forgot to mention that I’m tweaking things from the transfer learning tutorial on the pytorch website. Check it out to understand the dataset and other particulars.

(Krishna Vishal V) #6

To get output of any layer while doing a single forward pass, you can use register_forward_hook.

outputs = []
def hook(module, input, output):
    outputs.append(output)

res50_model = models.resnet50(pretrained=True)
# register the hook on whichever layer you want the output of,
# e.g. the last conv block:
res50_model.layer4.register_forward_hook(hook)

out = res50_model(res)   # res and res1 are two input batches
out = res50_model(res1)

Here in the outputs you get a list with two tensors in it. Those tensors are outputs of that particular layer for each forward pass.