Learn.summary() equivalent in v3?

Hi,

Wondering if learn.summary() disappeared from v3? Is there an equivalent? The closest match I can find is learn.model, but it doesn't seem to show the boolean I'm after: trainable.

Thanks!

Maybe this thread can help you: https://forums.fast.ai/t/keras-like-summary-of-a-model/28845

This still doesn't show which layers are trainable. But if your model isn't frozen, all the layers with #parameters > 0 are trainable.
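
For example, a quick pure-PyTorch sketch (assuming a Learner named `learn`) to see which modules actually own parameters:

```python
# Minimal sketch: list each module that owns parameters directly,
# together with its parameter count.
for name, module in learn.model.named_modules():
    n_params = sum(p.numel() for p in module.parameters(recurse=False))
    if n_params > 0:
        print(f'{name}: {n_params} parameters')
```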


Rather unfortunately, the link leads me to:

Sorry, you don’t have access to that topic!

Sorry, I hadn't thought that you might not have access to Part 3.

Take a look at those links:

The first one shows how to print a summary() using pure PyTorch, and the second is a gist showing how to use summary() in fastai (a new feature pushed last week).


Thank you!

Thank you, I just updated it to add the trainable parameter to the summary.

In case you have not seen it yet: learn.summary() was reintroduced in version 1.0.30, so you can use it out of the box now. But it does not contain the trainable status you are looking for…
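
A minimal usage sketch (assuming fastai v1 >= 1.0.30 and an ImageDataBunch named `data` that you have already built):

```python
from fastai.vision import *   # fastai v1

learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn.summary()   # layer types, output shapes and parameter counts, but no trainable flag
```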

You are right: I haven't found what I'm after. Basically, I'd like to see the before/after of model creation, unfreezing, freezing, loading and fitting. I also tried the attribute learn.model[?][?].training but can't make sense of it. E.g. upon create_cnn with a pretrained resnet*, the modules under learn.model[0][*] all appear to have training=False, while those under learn.model[1][*] all have training=True. Those training=True flags disappeared right after the first fit and never returned: not when unfrozen, not at subsequent fits. The following is from 'my own summary', where an asterisk indicates training=True, and its absence training=False:
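
For completeness, here's roughly the loop behind 'my own summary' (a minimal sketch; it just walks learn.model and marks modules whose .training attribute is True with an asterisk, assuming a fastai v1 cnn learner named `learn`):

```python
def training_summary(model):
    # model is learn.model: in fastai v1 a Sequential of [body, head]
    for i, group in enumerate(model):
        for j, layer in enumerate(group):
            star = '*' if layer.training else ''
            print(f'learn.model[{i}][{j}] {type(layer).__name__}{star}')
            if not list(layer.children()):  # leaf module, nothing to descend into
                continue
            # walk the nested modules of a composite block (e.g. a resnet stage)
            for k, sub in enumerate(layer.modules()):
                star = '*' if sub.training else ''
                print(f'learn.model[{i}][{j}][{k:03d}] —{type(sub).__name__}{star}')

training_summary(learn.model)
```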

Upon create_cnn:

learn.model[0][0] Conv2d
learn.model[0][1] BatchNorm2d
learn.model[0][2] ReLU
learn.model[0][3] MaxPool2d
learn.model[0][4] Sequential
learn.model[0][4][000] —Sequential
learn.model[0][4][001] —BasicBlock
learn.model[0][4][002] —Conv2d
learn.model[0][4][003] —BatchNorm2d
learn.model[0][4][004] —ReLU
learn.model[0][4][005] —Conv2d
learn.model[0][4][006] —BatchNorm2d
learn.model[0][4][007] —BasicBlock
learn.model[0][4][008] —Conv2d
learn.model[0][4][009] —BatchNorm2d
learn.model[0][4][010] —ReLU
learn.model[0][4][011] —Conv2d
learn.model[0][4][012] —BatchNorm2d
learn.model[0][4][013] —BasicBlock
learn.model[0][4][014] —Conv2d
learn.model[0][4][015] —BatchNorm2d
learn.model[0][4][016] —ReLU
learn.model[0][4][017] —Conv2d
learn.model[0][4][018] —BatchNorm2d
learn.model[0][5] Sequential
learn.model[0][5][000] —Sequential
learn.model[0][5][001] —BasicBlock
learn.model[0][5][002] —Conv2d
learn.model[0][5][003] —BatchNorm2d
learn.model[0][5][004] —ReLU
learn.model[0][5][005] —Conv2d
learn.model[0][5][006] —BatchNorm2d
learn.model[0][5][007] —Sequential
learn.model[0][5][008] —Conv2d
learn.model[0][5][009] —BatchNorm2d
learn.model[0][5][010] —BasicBlock
learn.model[0][5][011] —Conv2d
learn.model[0][5][012] —BatchNorm2d
learn.model[0][5][013] —ReLU
learn.model[0][5][014] —Conv2d
learn.model[0][5][015] —BatchNorm2d
learn.model[0][5][016] —BasicBlock
learn.model[0][5][017] —Conv2d
learn.model[0][5][018] —BatchNorm2d
learn.model[0][5][019] —ReLU
learn.model[0][5][020] —Conv2d
learn.model[0][5][021] —BatchNorm2d
learn.model[0][5][022] —BasicBlock
learn.model[0][5][023] —Conv2d
learn.model[0][5][024] —BatchNorm2d
learn.model[0][5][025] —ReLU
learn.model[0][5][026] —Conv2d
learn.model[0][5][027] —BatchNorm2d
learn.model[0][6] Sequential
learn.model[0][6][000] —Sequential
learn.model[0][6][001] —BasicBlock
learn.model[0][6][002] —Conv2d
learn.model[0][6][003] —BatchNorm2d
learn.model[0][6][004] —ReLU
learn.model[0][6][005] —Conv2d
learn.model[0][6][006] —BatchNorm2d
learn.model[0][6][007] —Sequential
learn.model[0][6][008] —Conv2d
learn.model[0][6][009] —BatchNorm2d
learn.model[0][6][010] —BasicBlock
learn.model[0][6][011] —Conv2d
learn.model[0][6][012] —BatchNorm2d
learn.model[0][6][013] —ReLU
learn.model[0][6][014] —Conv2d
learn.model[0][6][015] —BatchNorm2d
learn.model[0][6][016] —BasicBlock
learn.model[0][6][017] —Conv2d
learn.model[0][6][018] —BatchNorm2d
learn.model[0][6][019] —ReLU
learn.model[0][6][020] —Conv2d
learn.model[0][6][021] —BatchNorm2d
learn.model[0][6][022] —BasicBlock
learn.model[0][6][023] —Conv2d
learn.model[0][6][024] —BatchNorm2d
learn.model[0][6][025] —ReLU
learn.model[0][6][026] —Conv2d
learn.model[0][6][027] —BatchNorm2d
learn.model[0][6][028] —BasicBlock
learn.model[0][6][029] —Conv2d
learn.model[0][6][030] —BatchNorm2d
learn.model[0][6][031] —ReLU
learn.model[0][6][032] —Conv2d
learn.model[0][6][033] —BatchNorm2d
learn.model[0][6][034] —BasicBlock
learn.model[0][6][035] —Conv2d
learn.model[0][6][036] —BatchNorm2d
learn.model[0][6][037] —ReLU
learn.model[0][6][038] —Conv2d
learn.model[0][6][039] —BatchNorm2d
learn.model[0][7] Sequential
learn.model[0][7][000] —Sequential
learn.model[0][7][001] —BasicBlock
learn.model[0][7][002] —Conv2d
learn.model[0][7][003] —BatchNorm2d
learn.model[0][7][004] —ReLU
learn.model[0][7][005] —Conv2d
learn.model[0][7][006] —BatchNorm2d
learn.model[0][7][007] —Sequential
learn.model[0][7][008] —Conv2d
learn.model[0][7][009] —BatchNorm2d
learn.model[0][7][010] —BasicBlock
learn.model[0][7][011] —Conv2d
learn.model[0][7][012] —BatchNorm2d
learn.model[0][7][013] —ReLU
learn.model[0][7][014] —Conv2d
learn.model[0][7][015] —BatchNorm2d
learn.model[0][7][016] —BasicBlock
learn.model[0][7][017] —Conv2d
learn.model[0][7][018] —BatchNorm2d
learn.model[0][7][019] —ReLU
learn.model[0][7][020] —Conv2d
learn.model[0][7][021] —BatchNorm2d

learn.model[1][0] AdaptiveConcatPool2d*
learn.model[1][0][000] —AdaptiveConcatPool2d*
learn.model[1][0][001] —AdaptiveAvgPool2d*
learn.model[1][0][002] —AdaptiveMaxPool2d*
learn.model[1][1] Lambda*
learn.model[1][2] BatchNorm1d*
learn.model[1][3] Dropout*
learn.model[1][4] Linear*
learn.model[1][5] ReLU*
learn.model[1][6] BatchNorm1d*
learn.model[1][7] Dropout*
learn.model[1][8] Linear*

Upon fit:

learn.model[0][0] Conv2d
learn.model[0][1] BatchNorm2d
learn.model[0][2] ReLU
learn.model[0][3] MaxPool2d
learn.model[0][4] Sequential
learn.model[0][4][000] —Sequential
learn.model[0][4][001] —BasicBlock
learn.model[0][4][002] —Conv2d
learn.model[0][4][003] —BatchNorm2d
learn.model[0][4][004] —ReLU
learn.model[0][4][005] —Conv2d
learn.model[0][4][006] —BatchNorm2d
learn.model[0][4][007] —BasicBlock
learn.model[0][4][008] —Conv2d
learn.model[0][4][009] —BatchNorm2d
learn.model[0][4][010] —ReLU
learn.model[0][4][011] —Conv2d
learn.model[0][4][012] —BatchNorm2d
learn.model[0][4][013] —BasicBlock
learn.model[0][4][014] —Conv2d
learn.model[0][4][015] —BatchNorm2d
learn.model[0][4][016] —ReLU
learn.model[0][4][017] —Conv2d
learn.model[0][4][018] —BatchNorm2d
learn.model[0][5] Sequential
learn.model[0][5][000] —Sequential
learn.model[0][5][001] —BasicBlock
learn.model[0][5][002] —Conv2d
learn.model[0][5][003] —BatchNorm2d
learn.model[0][5][004] —ReLU
learn.model[0][5][005] —Conv2d
learn.model[0][5][006] —BatchNorm2d
learn.model[0][5][007] —Sequential
learn.model[0][5][008] —Conv2d
learn.model[0][5][009] —BatchNorm2d
learn.model[0][5][010] —BasicBlock
learn.model[0][5][011] —Conv2d
learn.model[0][5][012] —BatchNorm2d
learn.model[0][5][013] —ReLU
learn.model[0][5][014] —Conv2d
learn.model[0][5][015] —BatchNorm2d
learn.model[0][5][016] —BasicBlock
learn.model[0][5][017] —Conv2d
learn.model[0][5][018] —BatchNorm2d
learn.model[0][5][019] —ReLU
learn.model[0][5][020] —Conv2d
learn.model[0][5][021] —BatchNorm2d
learn.model[0][5][022] —BasicBlock
learn.model[0][5][023] —Conv2d
learn.model[0][5][024] —BatchNorm2d
learn.model[0][5][025] —ReLU
learn.model[0][5][026] —Conv2d
learn.model[0][5][027] —BatchNorm2d
learn.model[0][6] Sequential
learn.model[0][6][000] —Sequential
learn.model[0][6][001] —BasicBlock
learn.model[0][6][002] —Conv2d
learn.model[0][6][003] —BatchNorm2d
learn.model[0][6][004] —ReLU
learn.model[0][6][005] —Conv2d
learn.model[0][6][006] —BatchNorm2d
learn.model[0][6][007] —Sequential
learn.model[0][6][008] —Conv2d
learn.model[0][6][009] —BatchNorm2d
learn.model[0][6][010] —BasicBlock
learn.model[0][6][011] —Conv2d
learn.model[0][6][012] —BatchNorm2d
learn.model[0][6][013] —ReLU
learn.model[0][6][014] —Conv2d
learn.model[0][6][015] —BatchNorm2d
learn.model[0][6][016] —BasicBlock
learn.model[0][6][017] —Conv2d
learn.model[0][6][018] —BatchNorm2d
learn.model[0][6][019] —ReLU
learn.model[0][6][020] —Conv2d
learn.model[0][6][021] —BatchNorm2d
learn.model[0][6][022] —BasicBlock
learn.model[0][6][023] —Conv2d
learn.model[0][6][024] —BatchNorm2d
learn.model[0][6][025] —ReLU
learn.model[0][6][026] —Conv2d
learn.model[0][6][027] —BatchNorm2d
learn.model[0][6][028] —BasicBlock
learn.model[0][6][029] —Conv2d
learn.model[0][6][030] —BatchNorm2d
learn.model[0][6][031] —ReLU
learn.model[0][6][032] —Conv2d
learn.model[0][6][033] —BatchNorm2d
learn.model[0][6][034] —BasicBlock
learn.model[0][6][035] —Conv2d
learn.model[0][6][036] —BatchNorm2d
learn.model[0][6][037] —ReLU
learn.model[0][6][038] —Conv2d
learn.model[0][6][039] —BatchNorm2d
learn.model[0][7] Sequential
learn.model[0][7][000] —Sequential
learn.model[0][7][001] —BasicBlock
learn.model[0][7][002] —Conv2d
learn.model[0][7][003] —BatchNorm2d
learn.model[0][7][004] —ReLU
learn.model[0][7][005] —Conv2d
learn.model[0][7][006] —BatchNorm2d
learn.model[0][7][007] —Sequential
learn.model[0][7][008] —Conv2d
learn.model[0][7][009] —BatchNorm2d
learn.model[0][7][010] —BasicBlock
learn.model[0][7][011] —Conv2d
learn.model[0][7][012] —BatchNorm2d
learn.model[0][7][013] —ReLU
learn.model[0][7][014] —Conv2d
learn.model[0][7][015] —BatchNorm2d
learn.model[0][7][016] —BasicBlock
learn.model[0][7][017] —Conv2d
learn.model[0][7][018] —BatchNorm2d
learn.model[0][7][019] —ReLU
learn.model[0][7][020] —Conv2d
learn.model[0][7][021] —BatchNorm2d

learn.model[1][0] AdaptiveConcatPool2d
learn.model[1][0][000] —AdaptiveConcatPool2d
learn.model[1][0][001] —AdaptiveAvgPool2d
learn.model[1][0][002] —AdaptiveMaxPool2d
learn.model[1][1] Lambda
learn.model[1][2] BatchNorm1d
learn.model[1][3] Dropout
learn.model[1][4] Linear
learn.model[1][5] ReLU
learn.model[1][6] BatchNorm1d
learn.model[1][7] Dropout
learn.model[1][8] Linear

I had no luck even comparing weight tensors: they seem identical before and after model creation, unfreezing, freezing, loading and fitting. I can't find the right attribute or handle to see the before/after.

The thing you are looking for with the freezing/unfreezing is the .requires_grad attribute (True/False) of every parameter. That's what gets set when freezing/unfreezing.

Check out the docs for discriminative layer training:
https://docs.fast.ai/basic_train#Discriminative-layer-training
and/or the source code for the freezing logic.

Or here in the PyTorch docs:
https://pytorch.org/docs/stable/notes/autograd.html#requires-grad
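
For example, a quick sketch (assuming a fastai v1 learner named `learn`) to count frozen vs. trainable parameters per layer group:

```python
def requires_grad_counts(model):
    # Count parameter tensors with requires_grad=False vs. True in each top-level group
    for i, group in enumerate(model):
        frozen    = sum(1 for p in group.parameters() if not p.requires_grad)
        trainable = sum(1 for p in group.parameters() if p.requires_grad)
        print(f'learn.model[{i}]  requires_grad=False: {frozen:3d}  requires_grad=True: {trainable:3d}')

requires_grad_counts(learn.model)  # compare before/after learn.freeze() or learn.unfreeze()
```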


Many thanks, Marc! Here's today's output: before vs. after freezing. The two columns of numbers count the parameters with requires_grad=False and requires_grad=True, respectively.