Intermittent: nbdev adds same (wrong) MLP info/source to docs for nn.Module classes?

I’ve noticed that classes that subclass nn.Module typically get the same irrelevant simple-network code and commentary added to the docs. It’s always the same:

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes::

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc.

.. note:: As per the example above, an __init__() call to the parent class must be made before assignment on the child.

(this is NOT in my notebooks at all, I don’t know where it’s coming from)
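For reference, the quoted block appears to be PyTorch’s own class docstring for nn.Module, which you can print directly:

    import torch.nn as nn
    # The boilerplate quoted above is nn.Module's own class docstring.
    print(nn.Module.__doc__)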

Example where this happens:

Why is nbdev supplying code that doesn’t exist in the source?

Here’s another spot where it happens: aeiou - datasets

Here’s another: aeiou - datasets

Update: OK, it looks like this is happening for routines where I didn’t call super().__init__(). Interesting. I’ll add that to the places I linked above and see if the unwanted text goes away…

Update 2: Nope. Adding the super().__init__() call did not make this unwanted boilerplate go away.

@muellerzr wrote me back on Discord:

What happens if you move the docstring out of __init__ and under the class definition instead?

…Yes, of course. That was the problem for those three routines: without a docstring on the class itself, the only docstring available is the one inherited from the superclass.
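For anyone hitting the same thing, here’s a minimal sketch of the before/after (class names and the `path` argument are made up for illustration). Doc tools that rely on inspect.getdoc() walk the MRO when a class has no docstring of its own, which is why the nn.Module boilerplate shows up:

    import inspect
    import torch.nn as nn

    # Docstring inside __init__: the class itself has no docstring, so
    # inspect.getdoc() falls back to nn.Module's -- the boilerplate above.
    class DatasetBefore(nn.Module):
        def __init__(self, path):
            "Loads audio clips from `path`."   # documents __init__ only
            super().__init__()
            self.path = path

    # Docstring directly under the class definition: this is what gets rendered.
    class DatasetAfter(nn.Module):
        "Loads audio clips from `path`."
        def __init__(self, path):
            super().__init__()
            self.path = path

    print(inspect.getdoc(DatasetBefore)[:40])  # "Base class for all neural network modul"
    print(inspect.getdoc(DatasetAfter))        # "Loads audio clips from `path`."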

Solved!
