Using pretrained weights in a custom model

I am currently working on an implementation of the paper Fully Convolutional Networks for Semantic Segmentation. I am first trying to build the FCN-32 architecture, using a pretrained VGG16 model and training on the PASCAL VOC 2012 dataset. My doubt is about how to use the pretrained VGG16 weights in the features section of my custom class. I have created a code snippet which demonstrates this scenario. Can anyone tell me whether this way of using pretrained weights in a custom model is correct?


import torch
import torch.nn as nn
from torchvision import models

# Loading the pretrained VGG16 model
model1 = models.vgg16(pretrained=True)

# Freezing the feature layers (the layers of the new head will stay trainable)
for param in model1.features.parameters():
    param.requires_grad = False

# Creating the FCN custom module
class FCN(nn.Module):
    def __init__(self):
        super(FCN, self).__init__()
        # Reuse the pretrained VGG16 convolutional layers as the feature extractor
        self.features = nn.Sequential(*list(model1.features.children()))
        # Fully convolutional head: a 7x7 convolution in place of fc6, a 1x1 score
        # layer for the 21 PASCAL VOC classes, and a stride-32 transposed
        # convolution to upsample back to the input resolution
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 4096, 7),
            nn.Dropout(),
            nn.Conv2d(4096, 21, 1),
            nn.Dropout(),
            nn.ConvTranspose2d(21, 21, 224, stride=32)
        )

    def forward(self, x):
        x = self.features(x)
        x = self.classifier(x)
        return x

model2 = FCN()

# Again freezing the feature layers of model2
for param in model2.features.parameters():
    param.requires_grad = False

# Confirming that all the pretrained weights from VGG16 were transferred to the
# custom FCN model; this should print True
print(all(torch.equal(p2, p1) for p2, p1 in
          zip(model2.features.parameters(), model1.features.parameters())))
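Side note: since the feature layers are frozen, only the classifier parameters actually need to be optimized, so one option is to hand just those to the optimizer. A rough sketch of that step follows; the optimizer choice and learning rate below are only placeholders, not part of the snippet above:

import torch.optim as optim

# Only the unfrozen (requires_grad=True) parameters, i.e. the new classifier
# head, are passed to the optimizer; SGD and the learning rate are placeholders.
trainable_params = [p for p in model2.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable_params, lr=1e-3, momentum=0.9)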

It looks like all you’re doing is adding self.classifier at the end of vgg16. A better method would be to create a ConvLearner object, like this:

class FCN(nn.Module):
    def __init__(self):
        super(FCN,self).__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 4096, 7),
            nn.Dropout(),
            nn.Conv2d(4096, 21, 1),
            nn.Dropout(),
            nn.ConvTranspose2d(21, 21, 224, stride=32)
        )

    def forward(self,x):
        x = self.classifier(x)
        return x

learn = ConvLearner(data, tvm.vgg16, metrics=accuracy, custom_head=FCN)

@poppingtonic can you tell me how to implement this in PyTorch, as my complete implementation is in PyTorch?
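For reference, a rough plain-PyTorch equivalent of the same custom-head idea (without ConvLearner) could look like the sketch below. It simply combines the frozen vgg16 features with the head layers already shown above; the names backbone, head, and model are just placeholders, and this is a sketch rather than a verified implementation:

import torch
import torch.nn as nn
from torchvision import models

# Frozen VGG16 convolutional backbone (the same layers as model1.features above)
backbone = models.vgg16(pretrained=True).features
for p in backbone.parameters():
    p.requires_grad = False

# Same head layers as in the snippets above
head = nn.Sequential(
    nn.Conv2d(512, 4096, 7),
    nn.Dropout(),
    nn.Conv2d(4096, 21, 1),
    nn.Dropout(),
    nn.ConvTranspose2d(21, 21, 224, stride=32)
)

model = nn.Sequential(backbone, head)

# Quick shape check: for a 224x224 input, the vgg16 features give 7x7 maps, the
# 7x7 conv reduces them to 1x1, and the transposed conv upsamples back to
# 224x224, so this should print torch.Size([1, 21, 224, 224])
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)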