Model.parameters() is empty

I am having a hard time getting PyTorch to recognize the parameters of my model. I have rewritten this code in a couple of different ways, but no luck.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTreeByHand2(nn.Module):
    def __init__(self, tree_depth=2, num_classes=10, ni=28*28):
        super().__init__()
        self.num_leaves = 2**tree_depth
        self.num_nodes = self.num_leaves - 1
        self.tree_depth = tree_depth

        # one classifier per leaf and one routing unit per decision node,
        # kept in plain Python lists
        self.leaves = [nn.Linear(ni, num_classes) for i in range(self.num_leaves)]
        self.nodes = [nn.Linear(ni, 1) for i in range(self.num_nodes)]

    def forward(self, x):
        # routing probability at each decision node
        dp = [torch.sigmoid(self.nodes[i](x)) for i in range(self.num_nodes)]
        # class log-probabilities at each leaf
        prob = [F.log_softmax(self.leaves[i](x), dim=1) for i in range(self.num_leaves)]
        # path weight of each of the 4 leaves (hard-coded for tree_depth=2)
        w = [dp[0]*dp[1], dp[0]*(1-dp[1]), (1-dp[0])*dp[2], (1-dp[0])*(1-dp[2])]
        return [(prob[i], w[i]) for i in range(4)]

Here is what I run:

model = NeuralTreeByHand2().cuda()
print(model.parameters)

I get this

<bound method Module.parameters of NeuralTreeByHand2 (
)>

It is returning the method. You need to call it: print(model.parameters())
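For example, here is a minimal sketch of the difference, using a plain nn.Linear as a stand-in for any model with registered parameters:

m = nn.Linear(3, 2)
print(m.parameters)          # prints the bound method itself, plus the module's repr
print(list(m.parameters()))  # prints the actual weight and bias tensors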

Here is an example that works

class Model2(nn.Module):
    def __init__(self):
        super(Model2, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

m = Model2().cuda()
print(m.parameters)
<bound method Module.parameters of Model2 (
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(20, 20, kernel_size=(5, 5), stride=(1, 1))
)>

And here is the original example:

model = NeuralTreeByHand2().cuda()
print(list(model.parameters()))

gives

[]

When you do print(m.parameters), it returns the method without calling it, and what gets printed is the method's representation, which embeds the representation of the model (I think that comes from the __repr__ method of torch.nn.modules.Module in the PyTorch source).

You would get a similar effect if you were to just run print(model), or execute a Jupyter notebook cell containing only model.
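A minimal sketch of that equivalence (assuming m is any nn.Module instance):

m = nn.Linear(3, 2)
print(m)             # calls Module.__repr__ under the hood
print(repr(m))       # same output
print(m.parameters)  # bound-method repr wraps that same text in "<bound method ... of ...>"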

As for how parameters work with custom Modules, I am not sure. I suspect they should still get registered somehow (I have mostly used nn.Sequential), and if so, the magic that does this probably lives here 🙂 (this is __setattr__ from nn.modules.module in the PyTorch source):

    def __setattr__(self, name, value):
        def remove_from(*dicts):
            for d in dicts:
                if name in d:
                    del d[name]

        params = self.__dict__.get('_parameters')
        if isinstance(value, Parameter):
            if params is None:
                raise AttributeError(
                    "cannot assign parameters before Module.__init__() call")
            remove_from(self.__dict__, self._buffers, self._modules)
            self.register_parameter(name, value)
        elif params is not None and name in params:
            if value is not None:
                raise TypeError("cannot assign '{}' as parameter '{}' "
                                "(torch.nn.Parameter or None expected)"
                                .format(torch.typename(value), name))
            self.register_parameter(name, value)
        else:
            modules = self.__dict__.get('_modules')
            if isinstance(value, Module):
                if modules is None:
                    raise AttributeError(
                        "cannot assign module before Module.__init__() call")
                remove_from(self.__dict__, self._parameters, self._buffers)
                modules[name] = value
            elif modules is not None and name in modules:
                if value is not None:
                    raise TypeError("cannot assign '{}' as child module '{}' "
                                    "(torch.nn.Module or None expected)"
                                    .format(torch.typename(value), name))
                modules[name] = value
            else:
                buffers = self.__dict__.get('_buffers')
                if buffers is not None and name in buffers:
                    if value is not None and not torch.is_tensor(value):
                        raise TypeError("cannot assign '{}' as buffer '{}' "
                                        "(torch.Tensor or None expected)"
                                        .format(torch.typename(value), name))
                    buffers[name] = value
                else:
                    object.__setattr__(self, name, value)

If parameters that would normally be registered are not, my best guess is that it is because you are keeping the modules in a list. Looking at __setattr__ above, a value only gets registered when it is a Parameter or a Module instance; a plain Python list is neither, so the assignment falls through to the final object.__setattr__ branch. The Linear layers inside the list never make it into _modules, so there is nothing for the parameters() method to return later.
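If that guess is right, it should be easy to verify on a minimal module (a sketch, with a hypothetical ListModel standing in for the class above):

class ListModel(nn.Module):
    def __init__(self):
        super().__init__()
        # stored as a plain list attribute, so __setattr__ takes the object.__setattr__ branch
        self.layers = [nn.Linear(4, 4) for _ in range(3)]

m = ListModel()
print(m._modules)            # empty -- the Linear layers were never registered as submodules
print(list(m.parameters()))  # [] -- so their parameters are invisible too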


@radek is on the right track. To fix this, wrap the list in nn.ModuleList().


Thank you!

Just that change worked!

self.leaves = nn.ModuleList([nn.Linear(ni, num_classes) for i in range(self.num_leaves)])
self.nodes = nn.ModuleList([nn.Linear(ni, 1) for i in range(self.num_nodes)])
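A quick sanity check (a sketch; the count follows from 4 leaf plus 3 node Linear layers, each contributing a weight and a bias):

model = NeuralTreeByHand2()
print(len(list(model.parameters())))  # 14 = (4 + 3) * 2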