I noticed in Lesson 2 that each time Jeremy created a model, the computations went into the __call__ method.
However, when I did the PyTorch tutorials, they were putting them in the forward method.
By comparing the two approaches (see here) and looking at the docs, I found that they behave the same way (in both cases the model can be called like a function).
So is there any reason to prefer one over the other? Is it something specific to fastai?
That’s what makes this part 2 really exciting! I’m not only learning about fastai and PyTorch, but also about pure Python and how good software is designed.
I get that in Lesson 8 we were building everything from scratch, so we had to define __call__ ourselves to get the usual y = model(x) behaviour. But I was referring more to this kind of code:
class Model(nn.Module):
    def __init__(self, n_in, nh, n_out):
        super().__init__()
        self.layers = [nn.Linear(n_in, nh), nn.ReLU(), nn.Linear(nh, n_out)]
    def __call__(self, x):
        for l in self.layers: x = l(x)
        return x
Now that we inherit from nn.Module, should we still use __call__ to define our forward pass?
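For reference, here is a sketch of what the same model looks like with the computation moved into forward. I also swapped the plain Python list for an nn.ModuleList, since (if I understand the docs correctly) a plain list hides the sub-modules from model.parameters():

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, n_in, nh, n_out):
        super().__init__()
        # nn.ModuleList registers the sub-modules, so their
        # parameters show up in model.parameters()
        self.layers = nn.ModuleList(
            [nn.Linear(n_in, nh), nn.ReLU(), nn.Linear(nh, n_out)])

    def forward(self, x):
        for l in self.layers:
            x = l(x)
        return x

model = Model(4, 8, 2)
out = model(torch.randn(3, 4))      # nn.Module.__call__ dispatches to forward
print(out.shape)                    # torch.Size([3, 2])
print(len(list(model.parameters())))  # 4: weight/bias for each Linear
```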
I took a look at the PyTorch source code and saw that nn.Module.__call__ itself calls the forward method: result = self.forward(*input, **kwargs), after running any registered hooks. So I guess it is safer to put the forward-pass computations in the forward method; overriding __call__ directly would skip that hook machinery.
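A quick way to see the difference (a minimal sketch, with made-up module names): a model that defines forward still goes through nn.Module.__call__, so forward hooks fire; a model that shadows __call__ silently bypasses them.

```python
import torch
import torch.nn as nn

class WithForward(nn.Module):
    def forward(self, x):
        return x * 2

class WithCall(nn.Module):
    def __call__(self, x):  # shadows nn.Module.__call__, skipping hooks
        return x * 2

calls = []
m1, m2 = WithForward(), WithCall()
m1.register_forward_hook(lambda mod, inp, out: calls.append("m1"))
m2.register_forward_hook(lambda mod, inp, out: calls.append("m2"))

x = torch.ones(1)
m1(x)
m2(x)
print(calls)  # ['m1'] -- the hook registered on m2 never fires
```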