Create a custom output layer for a ResNet in a CNN learner

I want to create a CNN learner that is a ResNet architecture plus an extra layer at the end that normalizes the ResNet's output, for problem-specific reasons. What is the canonical way of doing this?

I have defined my custom output layer as:

class CustomOutput(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, xb):
        # answer_probability is my problem-specific normalization
        return answer_probability(xb)

The following runs, but it seems a bit hacky:

learner = cnn_learner(data, models.resnet50, metrics=rmse)
# wrap the whole model in a new Sequential with the extra layer on the end
learner.model = nn.Sequential(learner.model, CustomOutput())

(I’m also unsure whether the Learner’s layer freezing still operates correctly after this.)

I have also tried this:

class CustomCNN(nn.Module):
    def __init__(self, arch=models.resnet34):
        super().__init__()
        self.body = create_body(arch)
        self.cnn = nn.Sequential(self.body, CustomOutput())
    
    def forward(self, xb):
        return self.cnn(xb)
    
learner = cnn_learner(data, CustomCNN, metrics=rmse)

But this fails with error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-46-a5adf3bbf097> in <module>
----> 1 learner = cnn_learner(data, model, metrics=rmse)

/opt/conda/envs/fastai/lib/python3.6/site-packages/fastai/vision/learner.py in cnn_learner(data, base_arch, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, init, concat_pool, **kwargs)
     95     meta = cnn_config(base_arch)
     96     model = create_cnn_model(base_arch, data.c, cut, pretrained, lin_ftrs, ps=ps, custom_head=custom_head,
---> 97         split_on=split_on, bn_final=bn_final, concat_pool=concat_pool)
     98     learn = Learner(data, model, **kwargs)
     99     learn.split(split_on or meta['split'])

/opt/conda/envs/fastai/lib/python3.6/site-packages/fastai/vision/learner.py in create_cnn_model(base_arch, nc, cut, pretrained, lin_ftrs, ps, custom_head, split_on, bn_final, concat_pool)
     81         split_on:Optional[SplitFuncOrIdxList]=None, bn_final:bool=False, concat_pool:bool=True):
     82     "Create custom convnet architecture"
---> 83     body = create_body(base_arch, pretrained, cut)
     84     if custom_head is None:
     85         nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1)

/opt/conda/envs/fastai/lib/python3.6/site-packages/fastai/vision/learner.py in create_body(arch, pretrained, cut)
     53 def create_body(arch:Callable, pretrained:bool=True, cut:Optional[Union[int, Callable]]=None):
     54     "Cut off the body of a typically pretrained `model` at `cut` (int) or cut the model as specified by `cut(model)` (function)."
---> 55     model = arch(pretrained)
     56     cut = ifnone(cut, cnn_config(arch)['cut'])
     57     if cut is None:

<ipython-input-44-c2079208eac2> in __init__(self, arch)
      2     def __init__(self, arch=models.resnet34):
      3         super().__init__()
----> 4         self.body = create_body(arch)
      5         self.cnn = nn.Sequential(self.body, CustomOutput())
      6 

/opt/conda/envs/fastai/lib/python3.6/site-packages/fastai/vision/learner.py in create_body(arch, pretrained, cut)
     53 def create_body(arch:Callable, pretrained:bool=True, cut:Optional[Union[int, Callable]]=None):
     54     "Cut off the body of a typically pretrained `model` at `cut` (int) or cut the model as specified by `cut(model)` (function)."
---> 55     model = arch(pretrained)
     56     cut = ifnone(cut, cnn_config(arch)['cut'])
     57     if cut is None:

TypeError: 'bool' object is not callable

You could probably build it into a custom head instead, and pass that in as your head.

Thanks for the quick response. I’m not so familiar with the software design here. What are the ‘head’ and ‘body’ of a network in fastai/PyTorch? How would I create a custom head and then use it?

cnn_learner has a custom_head argument. It passes that head to create_cnn_model, which returns nn.Sequential(body, head).
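For example, something like this (a rough, untested sketch for fastai v1, assuming the usual from fastai.vision import *; it rebuilds the default head with create_head and appends your layer):

body = create_body(models.resnet50)
# channels out of the body's last conv layer, doubled because the default
# head uses AdaptiveConcatPool2d (concatenated average + max pooling)
nf = num_features_model(nn.Sequential(*body.children())) * 2
my_head = nn.Sequential(create_head(nf, data.c), CustomOutput())
learner = cnn_learner(data, models.resnet50, custom_head=my_head, metrics=rmse)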

I think the freezing will happen correctly either way.
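If you want to double-check, a quick sanity check (fastai v1) is to freeze and then look at how many parameters in each layer group are still trainable; the frozen groups should show only a small number (BatchNorm layers stay trainable by default):

learner.freeze()
for i, group in enumerate(learner.layer_groups):
    n_train = sum(p.numel() for p in group.parameters() if p.requires_grad)
    n_total = sum(p.numel() for p in group.parameters())
    print(f'group {i}: {n_train}/{n_total} trainable params')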

You can also use a callback that gets called at the end of each batch.
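For example (a rough, untested sketch for fastai v1): the on_loss_begin hook lets you transform the raw model output before the loss sees it:

class NormalizeOutput(LearnerCallback):
    def on_loss_begin(self, last_output, **kwargs):
        # apply the problem-specific normalization to the raw model output
        return {'last_output': answer_probability(last_output)}

learner = cnn_learner(data, models.resnet50, metrics=rmse,
                      callback_fns=[NormalizeOutput])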

Thanks! This is a step in the right direction. I tried this, but it removes the default head and replaces it with just my custom layer. What I really wanted was to modify the default head by adding my own custom layer to the end of it. I played with using create_head, but it requires a lot of ‘insider’ knowledge of the cnn body, and you end up basically reimplementing cnn_learner. The best hack I found was just:

learner = cnn_learner(data, models.resnet50, metrics=rmse, ps=0.3)
# the model is nn.Sequential(body, head): unpack the default head
# and append the custom layer to the end of it
learner.model[-1] = nn.Sequential(*learner.model[-1], CustomOutput())

That’s a hacky solution, but it works.
You could also modify create_head to add your custom layer at the end (it’s the same thing you’re doing, just in a cleaner way).
A callback could also be a cleaner implementation.
A head can also be something very simple:

def head(in_c, out_c):
    # in_c is the number of channels out of the body's last conv layer;
    # the first layers take 2*in_c because AdaptiveConcatPool2d
    # concatenates average and max pooling
    return nn.Sequential(AdaptiveConcatPool2d(),
                         Flatten(),
                         nn.BatchNorm1d(2*in_c),
                         nn.Dropout(p=0.25),
                         nn.Linear(2*in_c, 512),
                         nn.ReLU(inplace=True),
                         nn.BatchNorm1d(512),
                         nn.Dropout(p=0.2),
                         nn.Linear(512, out_c),
                         CustomLayer()
                        )


Thank you. This is cleaner, but I’m not sure there is a clean way to determine in_c, since it depends on which body is used. You could create the default network, check what the value is there, and then create the custom head.

That is what I usually do. The value, just to be clear, is the number of channels from the last convolutional layer (512 for ResNet18/34, 2048 for ResNet50). So the input to the first linear layer is 2x that, since we are using ConcatPool (a concatenation of average and max pooling).
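You can also read it off programmatically rather than hard-coding it, e.g. (fastai v1, using the head function above):

body = create_body(models.resnet50)
in_c = num_features_model(nn.Sequential(*body.children()))  # 2048 for ResNet50
learner = cnn_learner(data, models.resnet50,
                      custom_head=head(in_c, data.c), metrics=rmse)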


Fair enough. Thanks for your help!

Hello everyone. I have a multi-label classification problem where the labels look like 2;3, 4;3, 0;9, 5;6, etc. I want to build a neural network that outputs two arrays at the end, like [0,0,0,1,0,0,0,0,0] and [0,0,1,0,0,0,0,0,0], calculates the loss for each of these labels separately, adds up the losses, and then does a backward pass.

I could implement this in plain PyTorch: I just wrote a new forward function to make the neural network output two different answers/classes/labels, then a simple loop that calculated both losses separately, added them up, and called backward.
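Roughly like this (a minimal sketch of that loop, with hypothetical names):

for xb, (y1, y2) in train_loader:
    out1, out2 = model(xb)            # forward returns two predictions
    loss = F.cross_entropy(out1, y1) + F.cross_entropy(out2, y2)
    optimizer.zero_grad()
    loss.backward()                   # one backward through the summed loss
    optimizer.step()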

But I have spent about 10 hours now and still can’t figure out how to implement this in fastai. Can anyone help me, please?

If you can write it as a PyTorch layer, then you can write your own custom head and stick your layer on the end instead of the default output, like I did. This would give you your 2x9D output with the 512/1024/etc. input. I think you would then need to write a custom loss function to handle that 2x9 tensor as you described. You pass your custom head and custom loss function to the CNN learner in fastai as parameters. I’ve not tried this, but I hope it helps you get a little further.
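Something along these lines, perhaps (a hedged, untested sketch with hypothetical names; I'm assuming 9 classes per label and that the target arrives as a [bs, 2] tensor of class indices). fastai expects the model to return a single tensor, so you can concatenate the two predictions and split them again inside the loss:

N_CLASSES = 9  # classes per label; adjust to your data

class TwoLabelLayer(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.fc1 = nn.Linear(in_features, N_CLASSES)  # first label
        self.fc2 = nn.Linear(in_features, N_CLASSES)  # second label

    def forward(self, x):
        # concatenate so the Learner sees one [bs, 2*N_CLASSES] tensor
        return torch.cat([self.fc1(x), self.fc2(x)], dim=1)

def two_label_loss(pred, targ):
    # split the prediction back into halves and sum the two losses
    p1, p2 = pred[:, :N_CLASSES], pred[:, N_CLASSES:]
    return F.cross_entropy(p1, targ[:, 0]) + F.cross_entropy(p2, targ[:, 1])

You'd then pass two_label_loss to the learner as loss_func.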


I used your code and got the first part right. Now the learner is getting a 2x9D output. Whewww. Thanks, man.

And I really didn’t think about writing a custom loss function. Thanks for the idea. Going to try this now. Hope it solves the issue.

So I was trying to write a custom loss fn, but this is a multi-label classification problem where the classes are like 2;3, 0;19, etc. My loss_fn expects y_true to be a tensor of size [2, 10], but fastai is mixing all 20 classes together and producing y_true as a tensor of size [1, 20]. How do I make y_true come out as a tensor of size [2, 10] instead of [1, 20]?