How to add additional layers in any pretrained model like resnet34?

arch = models.resnet34
I want to add some additional layers to my ResNet model, and I tried to do it like this after the above line of code:
arch.fc = nn.Linear(512, 20)
and then
learn = cnn_learner(data, arch, metrics=accuracy)

But when I check the summary of my model, I don't find the extra layer (nn.Linear(512, 20)) that I had added to it.

from torchsummary import summary
summary(learn.model, input_size=(3, 128, 128))

Does anybody know how to add extra layers to a pretrained model so that, after creating the model, I only need to write this line:
learn = cnn_learner(data, arch, metrics=accuracy)

cnn_learner is handy when you only have small modifications to the architecture and want to make use of fastai's custom_head. Still, to get this working you need to make the following changes:

  1. Don’t create an instance of arch; cnn_learner needs a callable function to work with rather than a concrete nn.Module.

    So you want to pass in resnet34 itself, not your modified arch.

  2. Define your custom head like so:

fc_head = nn.Sequential(Flatten(), nn.Linear(512, 20))

Assuming you’re using a resnet34 from torchvision, you need to add this Flatten layer yourself: torchvision's ResNet does the flattening inside its own forward call, and that step is lost when cnn_learner cuts the architecture down to its convolutional body (see the short shape check after this list).

  3. Now, there are a few more bits you may need to take care of (split, cut), but they are not essential in your use case.
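To make the Flatten point concrete, here is a minimal, hedged shape check in plain PyTorch (it uses nn.Flatten instead of fastai's Flatten, and the pooled body output shape is just an illustrative assumption):

import torch
import torch.nn as nn

# pretend this is the pooled output of the cut ResNet body: [batch, 512, 1, 1]
body_out = torch.randn(2, 512, 1, 1)

head = nn.Sequential(nn.Flatten(), nn.Linear(512, 20))
print(head(body_out).shape)  # torch.Size([2, 20]); without the Flatten, the Linear layer errors out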

A final cnn_learner call like this would do your job:

learn = cnn_learner(data, resnet34, custom_head=fc_head, metrics=accuracy)
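If you then re-run the summary from your question (or fastai's own learn.summary()), the new Flatten + Linear(512, 20) head should show up at the end of the model:

learn.summary()  # the custom fc_head should now appear as the last layers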

Let me know if this works for you


Thanks, it works!
But as I try to move forward in my project, the main difficulty is converting this code from PyTorch to fastai.

class Quantumnet(nn.Module):

    def __init__(self):
        super().__init__()
        # n_qubits, q_delta, max_layers, filtered_classes, q_net and device are
        # defined earlier as part of the quantum-circuit setup
        self.pre_net = nn.Linear(512, n_qubits)
        self.q_params = nn.Parameter(q_delta * torch.randn(max_layers * n_qubits))
        self.post_net = nn.Linear(n_qubits, len(filtered_classes))

    def forward(self, input_features):
        pre_out = self.pre_net(input_features)
        q_in = torch.tanh(pre_out) * np.pi / 2.0

        # Apply the quantum circuit to each element of the batch, and append to q_out
        q_out = torch.Tensor(0, n_qubits)
        q_out = q_out.to(device)
        for elem in q_in:
            q_out_elem = q_net(elem, self.q_params).float().unsqueeze(0)
            q_out = torch.cat((q_out, q_out_elem))

        return self.post_net(q_out)

model_hybrid = torchvision.models.resnet18(pretrained=True)

# Freeze the pretrained backbone and replace its final fc with the quantum head
for param in model_hybrid.parameters():
    param.requires_grad = False

model_hybrid.fc = Quantumnet()

The thing is that I created a model, Quantumnet, in PyTorch, attached it as the final layer of resnet18, and that gave me my desired model. But now I want to use fastai and compare the results.
So, how can I do this?

As I said earlier, you can’t do it like this if you want to use cnn_learner. If you want to do it this way, I’d recommend using Learner instead.

The main advantages of using cnn_learner are as follows:

  1. fastai custom head (also, automatically inferring no. of classes from DataLoaders)
Sequential(
  (0): AdaptiveConcatPool2d(
    (ap): AdaptiveAvgPool2d(output_size=1)
    (mp): AdaptiveMaxPool2d(output_size=1)
  )
  (1): Flatten(full=False)
  (2): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (3): Dropout(p=0.25, inplace=False)
  (4): Linear(in_features=512, out_features=512, bias=False)
  (5): ReLU(inplace=True)
  (6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (7): Dropout(p=0.5, inplace=False)
  (8): Linear(in_features=512, out_features=20, bias=False)
)
  2. Add normalize transform if you haven’t already
  3. Initialize these new layers using the Kaiming Normal initializer
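As a hedged aside on point 1: the default head printed above is what fastai v2's create_head builds, so a rough equivalent call (exact layer sizes and argument handling can differ between fastai releases) is:

from fastai.vision.all import create_head

head = create_head(512, 20)  # features coming out of the body, and 20 output classes
print(head)                  # a Sequential similar to the one shown above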

The rest of the functionality is managed by the Learner class, so you’ll get that anyway. Now, coming to your task: you can simply pass your model_hybrid object to Learner and it will work. But, as I mentioned, the weights of your new module won’t be initialized by fastai anymore, and you won’t be able to take advantage of discriminative learning rates out of the box. To get both, try doing:

model_hybrid = torchvision.models.resnet18(pretrained=True)
cust_head = Quantumnet()
apply_init(cust_head)
model_hybrid.fc = cust_head

# torchvision's resnet18 has no `.features` attribute, so group everything
# except the new fc into the body and keep fc as its own parameter group
def hybrid_splitter(m):
    body = nn.Sequential(*list(m.children())[:-1])
    return L(body, m.fc).map(params)

learn = Learner(data, model_hybrid, splitter=hybrid_splitter, ...)

This will freeze the layers in the body group and only train your custom head.

By default, when you call learn.freeze(), only the last parameter group is kept trainable and the rest of the model is frozen.
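A small, hedged usage sketch of that workflow in fastai v2 (the learning-rate values are placeholders):

learn.freeze()                                     # only the Quantumnet head is trainable
learn.fit_one_cycle(3)
learn.unfreeze()                                   # both parameter groups trainable
learn.fit_one_cycle(3, lr_max=slice(1e-5, 1e-3))   # discriminative LRs across the groups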


There is an error coming: apply_init() missing 1 required positional argument: ‘init_func’
when apply_init(cust_head) is run.

Also, there is no splitter argument in the Learner function.

Are you using fastai v1?

EDIT: If so, you need to make the following changes:

  1. Use cnn_learner only
  2. splitter is called split_on in v1 and works a bit differently, so you need to change that
  3. apply_init’s 2nd parameter is only optional in v2, where it defaults to nn.init.kaiming_normal_, so in v1 you need to specify it explicitly

To summarize, your updated cnn_learner would look like:

cust_head = Quantumnet()
apply_init(cust_head, nn.init.kaiming_normal_)
learn = cnn_learner(data, resnet34, cut=-1, split_on=lambda m: (m[0][6], m[1]),
                    custom_head=cust_head)

With cut=-1 you remove the original fc of the model, with split_on you get a total of 3 splits for discriminative learning rates, and you used Kaiming Normal to initialize the dense layers of your custom_head.
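A hedged follow-up sketch for fastai v1 (learning rates are placeholders) to check the splits and train with discriminative learning rates:

print(len(learn.layer_groups))                     # expected: 3 groups from split_on
learn.freeze()                                     # train only the custom head first
learn.fit_one_cycle(2)
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-3))   # spread LRs across the 3 groups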

Refer to this notebook to learn more.


Thank you so much for your attention and for helping with my work.
I was able to complete my work only because of your kind help.
Thanks again!!
