Difference between `forward` and `__call__` methods

Hi everybody !

I noticed in Lesson 2 that each time Jeremy created a model, the computations were put in the `__call__` method.

However, when I did the PyTorch tutorials, the computations were put in the `forward` method.

By comparing the two approaches (see here) and looking at the docs, I found that they behave the same way (in both cases the model can be called like a function).

So is there any reason to prefer one over the other? Is it something specific to fastai?

Have a look at lesson 8, where I showed how `forward` is a small refactoring of the duplicate code in `__call__`.

Because we’re rebuilding everything from scratch, you can see exactly where everything comes from! :slight_smile:
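For anyone following along, the idea is roughly this: every module needs the same glue around its computation (remembering inputs and outputs, and so on), so that glue lives in `__call__` in a base class, and only the layer-specific math goes in `forward`. A minimal sketch of the idea (illustrative names, not the exact notebook code; `Relu` assumes a PyTorch tensor input):

    class Module():
        "Base class: __call__ holds the shared plumbing, forward holds the actual computation."
        def __call__(self, *args):
            self.args = args                 # remember the inputs (e.g. for a backward pass)
            self.out = self.forward(*args)   # the layer-specific work is delegated to forward
            return self.out

        def forward(self, *args):
            raise NotImplementedError("subclasses implement forward")

    class Relu(Module):
        def forward(self, x):
            # only the layer-specific code is written here
            return x.clamp_min(0.)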


That's what makes this part 2 really exciting! I'm not only learning about fastai and PyTorch, but also about pure Python and how good software is designed.

I get that in Lesson 8 we were building everything from scratch, so you had to define `__call__` to get the usual function behaviour `y = model(x)`. But I was referring more to this kind of code:

    from torch import nn

    class Model(nn.Module):
        def __init__(self, n_in, nh, n_out):
            super().__init__()
            self.layers = [nn.Linear(n_in, nh), nn.ReLU(), nn.Linear(nh, n_out)]

        def __call__(self, x):
            # the forward pass is written in __call__ rather than in forward
            for l in self.layers: x = l(x)
            return x

As we now inherit from `nn.Module`, should we still use `__call__` to define our forward pass?
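In other words, should we move the computation into `forward` instead, something like this? (Just a sketch; I've also wrapped the layers in `nn.ModuleList` so their parameters get registered, and the sizes are made up.)

    import torch
    from torch import nn

    class Model(nn.Module):
        def __init__(self, n_in, nh, n_out):
            super().__init__()
            # nn.ModuleList, unlike a plain Python list, registers the layers' parameters
            self.layers = nn.ModuleList([nn.Linear(n_in, nh), nn.ReLU(), nn.Linear(nh, n_out)])

        def forward(self, x):
            for l in self.layers: x = l(x)
            return x

    model = Model(784, 50, 10)
    out = model(torch.randn(64, 784))   # nn.Module.__call__ dispatches to forward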


I took a look at the PyTorch source code and saw that the `__call__` method calls the forward pass: `result = self.forward(*input, **kwargs)`. So I guess it is safer to put the forward pass computations in the `forward` method :slight_smile:
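One concrete reason this matters: `nn.Module.__call__` does extra bookkeeping around `forward` (for example running any registered hooks), so defining `forward` and letting PyTorch's `__call__` invoke it keeps that machinery working, whereas overriding `__call__` yourself bypasses it. A small sketch, reusing the `forward`-based `Model` above:

    def print_shape(module, inp, out):
        print(module.__class__.__name__, out.shape)

    # forward hooks are run by nn.Module.__call__, not by forward itself
    handle = model.layers[0].register_forward_hook(print_shape)
    model(torch.randn(2, 784))    # prints: Linear torch.Size([2, 50])
    handle.remove()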


Keep looking further, and you'll see we define `forward`. Listen to the video again to find out why.