# Getting #weights from RNN_Learner

Hi,
is it possible to get the number of weights used by the classifier?
I tried using `learn.model`, which gives me something like this:
```
SequentialRNN(
  (0): MultiBatchRNN(
    (encoder_with_dropout): EmbeddingDropout(
    )
    (rnns): ModuleList(
      (0): WeightDrop(
        (module): LSTM(400, 1150)
      )
      (1): WeightDrop(
        (module): LSTM(1150, 1150)
      )
      (2): WeightDrop(
        (module): LSTM(1150, 400)
      )
    )
    (dropouti): LockedDropout(
    )
    (dropouths): ModuleList(
      (0): LockedDropout(
      )
      (1): LockedDropout(
      )
      (2): LockedDropout(
      )
    )
  )
  (1): PoolingLinearClassifier(
    (layers): ModuleList(
      (0): LinearBlock(
        (lin): Linear(in_features=1200, out_features=50, bias=True)
        (drop): Dropout(p=0.2)
        (bn): BatchNorm1d(1200, eps=1e-05, momentum=0.1, affine=True)
      )
      (1): LinearBlock(
        (lin): Linear(in_features=50, out_features=39, bias=True)
        (drop): Dropout(p=0.1)
        (bn): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True)
      )
    )
  )
)
```
Therefore I can read off the dimensions of the individual weight matrices and compute the total number of weights. Is that approach correct, or is there a better one?
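Reading the counts off the printout works, but note one caveat: the `EmbeddingDropout` module prints without its dimensions, so the embedding weights would be missed. As a sanity check on the by-hand approach, here is a minimal pure-arithmetic sketch using the dimensions shown above (the formulas assume a standard single-layer LSTM with `weight_ih`, `weight_hh`, and two bias vectors, and a `BatchNorm1d` with `affine=True`; the helper names are mine, not fastai's):

```python
def lstm_params(input_size, hidden_size):
    """Single-layer LSTM: 4 gates, each with input weights,
    recurrent weights, and two bias vectors."""
    return 4 * hidden_size * (input_size + hidden_size + 2)

def linear_params(in_features, out_features, bias=True):
    """Weight matrix plus optional bias vector."""
    return in_features * out_features + (out_features if bias else 0)

def batchnorm_params(num_features):
    """Affine BatchNorm1d learns a scale and a shift per feature."""
    return 2 * num_features

# The three WeightDrop LSTMs from the printout
rnns = (lstm_params(400, 1150)
        + lstm_params(1150, 1150)
        + lstm_params(1150, 400))

# The PoolingLinearClassifier head (dropout layers have no weights)
head = (linear_params(1200, 50) + batchnorm_params(1200)
        + linear_params(50, 39) + batchnorm_params(50))

print(rnns, head)
```

This gives roughly 20.2M weights for the RNN stack and about 65k for the classifier head, before counting the hidden embedding matrix.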

Best,
René

Assume you’ve got a trained learner called `learn`:

```python
# Get the top-level layers
layers = [l for l in learn.model.children()]
# Get parameters for layer 0 (the encoder)
pars0 = [p for p in layers[0].parameters()]
# These are the weight tensors
pars0
# These are their dimensions
[p.shape for p in pars0]
```
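Rather than reading the shapes off by hand, PyTorch can sum the count directly: every module exposes `parameters()`, and `numel()` gives each tensor's element count. A minimal sketch on a stand-in module (the name `count_params` is illustrative, not a fastai helper; the LSTM below is just one layer from the printout):

```python
import torch.nn as nn

def count_params(model):
    """Total number of learnable elements across all parameter tensors."""
    return sum(p.numel() for p in model.parameters())

# Stand-in for learn.model: the first LSTM from the printout
net = nn.LSTM(400, 1150)
print(count_params(net))  # 7139200
```

On the real learner this would be `sum(p.numel() for p in learn.model.parameters())`, which also picks up the embedding weights that the printed repr hides; filtering on `p.requires_grad` would count only the currently trainable ones.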