How to cut the feature extractor (base model) from a learner (resnet)?

I’m trying to split a cnn_learner resnet34 model into the feature extraction part (up to Flatten()) and the rest, so I can use the output of that feature extraction part as input to another model.

Currently, I’m doing:

learn = cnn_learner(data, models.resnet34, metrics=[accuracy], pretrained=True, 
                callback_fns=[CSVLogger])

layers = split_model(learn.model,learn.model[0])[0][:93]
learn.layer_groups = layers

Below are the last five layers in learn.layer_groups now:

(88): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(89): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(90): AdaptiveAvgPool2d(output_size=1)
(91): AdaptiveMaxPool2d(output_size=1)
(92): Flatten()

But learn.get_preds() still outputs class predictions instead of the 1024 × 1 feature vector (the output of the Flatten layer). And the head isn’t even present in learn.layer_groups anymore.

Just quickly answering from memory…

AFAIK, layer_groups is a fastai construct used to implement discriminative learning rates by layer group, and perhaps to help with splitting off the head of an existing model. Assigning a new value to it does not affect the model the Learner actually uses.

You’ll need to alter the model and assign the new model to learn.model. Or construct a new Learner from it.
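
For example, something along these lines might work (a rough sketch from memory, assuming fastai v1’s default cnn_learner layout where learn.model[0] is the resnet body and learn.model[1] is the head; xb below is just a placeholder for a batch of images):

import torch
import torch.nn as nn

body = learn.model[0]              # convolutional feature extractor
pool_flat = learn.model[1][:2]     # AdaptiveConcatPool2d + Flatten from the head
feature_extractor = nn.Sequential(body, pool_flat).eval()

# call the cut model directly on a batch to get the flattened features
with torch.no_grad():
    feats = feature_extractor(xb)  # shape: [batch_size, 1024] for resnet34

IIRC get_preds() also applies the activation implied by the loss function (softmax here), so even after truncating the model it won’t hand you raw features; calling the cut model directly on a batch avoids that.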

HTH, and I hope it’s right enough!

I think what you are looking for is create_body() and create_head(); you probably want to check these two default functions in the fastai docs.
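
Something like this, as a rough sketch for fastai v1 (the 1024 is the 512 resnet34 features doubled by the concat pooling in the default head):

from fastai.vision import *

body = create_body(models.resnet34, pretrained=True)   # conv feature extractor only
head = create_head(1024, data.c)                        # pooling, Flatten, then the linear layers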

To be honest, if you want to customize your model, you will need to call Learner() to create your learner instead of using cnn_learner().

Once you are done creating the learner, you will need to do the split, and you can use learn.layer_groups to check whether your split is correct.
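
For example, continuing the sketch above (the split point m[0][6] mirrors fastai’s default resnet split and is only illustrative; adjust it for your own model):

learn = Learner(data, nn.Sequential(body, head), metrics=[accuracy])
learn.split(lambda m: (m[0][6], m[1]))   # two body groups plus the head group
learn.layer_groups                       # inspect whether the groups match your intended split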

Or, you can try running the model without discriminative learning rates; that way you don’t have to split your model.
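
For example (the epoch count and learning rate here are only placeholders):

learn.fit_one_cycle(4, 1e-3)   # a single max_lr is applied to every layer group, no split needed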

Hope this helps :slight_smile:
