Missing forward pass in video 2 taught by Jeremy

So Jeremy is teaching the training loop. In previous videos we learned that there's a forward pass, then a loss calculation, and then a backward pass.

for epoch in range(epochs):
    for i in range((n-1)//bs+1):
        # don't understand the n-1 here: this gives 782 -- shouldn't it be 784?
        start_i = i*bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        loss = loss_func(model(xb), yb)
        
        loss.backward()
        with torch.no_grad():
            for l in model.layers:
                if hasattr(l, 'weight'):
                    l.weight -= l.weight.grad * lr
                    l.bias -= l.bias.grad * lr
                    l.weight.grad.zero_()
                    l.bias  .grad.zero_()
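
(For context, the variables in this loop are defined earlier in the notebook. Here is roughly the setup I believe precedes it; the `Model` class and the exact sizes are my reconstruction, not necessarily Jeremy's exact code:)

import torch
from torch import nn
import torch.nn.functional as F

x_train = torch.randn(50000, 784)         # stand-in for the 50,000 flattened MNIST images
y_train = torch.randint(0, 10, (50000,))  # stand-in for the digit labels
n, m = x_train.shape                      # n = 50000 samples, m = 784 pixels per image
bs, lr, epochs = 64, 0.5, 1               # batch size, learning rate, number of epochs
loss_func = F.cross_entropy

class Model():
    def __init__(self, n_in, nh, n_out):
        self.layers = [nn.Linear(n_in, nh), nn.ReLU(), nn.Linear(nh, n_out)]
    def __call__(self, x):
        for l in self.layers: x = l(x)
        return x

model = Model(m, 50, 10)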

But in this piece of code that Jeremy wrote, there's no forward pass. I can see the backward pass, where the gradients are subtracted from the weights and the biases.

But where exactly is the forward pass?

Also, why can `l.bias` have a space before `.grad.zero_()`?

Shouldn't it be written without a space, like `l.bias.grad.zero_()`?

The forward pass is done implicitly in the line that computes the loss:

`loss = loss_func(model(xb), yb)`

Here, the `loss_func` input `model(xb)` runs the forward pass and computes the predictions.

Equivalently, that line could be replaced by the following two lines:

`preds = model(xb)`
`loss = loss_func(preds, yb)`
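
To make the phases explicit, here is the body of the inner loop with each step labeled (same names as in the code above):

xb = x_train[start_i:end_i]  # grab mini-batch i
yb = y_train[start_i:end_i]
preds = model(xb)            # forward pass: the batch flows through the layers
loss = loss_func(preds, yb)  # loss calculation: compare predictions with targets
loss.backward()              # backward pass: fills in .grad on every parameter
# ...the update under torch.no_grad() then subtracts lr * grad from each parameter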

Hope that clarifies what’s going on!
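
Also, regarding the `(n-1)//bs+1` comment in the code: that expression is ceiling division, i.e. the number of mini-batches needed to cover all n training examples. If I remember the notebook's values correctly (n = 50,000 training images, bs = 64), it comes out to 782. 784 is a different quantity entirely: it is 28 × 28, the number of pixels in one flattened MNIST image. A quick check:

n, bs = 50000, 64         # training set size and batch size (my recollection of the notebook)
print((n - 1) // bs + 1)  # 782 -> number of mini-batches, i.e. ceil(50000 / 64)
print(28 * 28)            # 784 -> pixels per image, unrelated to the batch count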

And to your last question: you are correct, `l.bias  .grad.zero_()` is equivalent to `l.bias.grad.zero_()`.
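
Python's tokenizer ignores whitespace around the attribute-access dot, so both spellings parse to exactly the same thing. A tiny demonstration:

import torch

t = torch.zeros(3)
t  .add_(1)    # whitespace before the '.' is ignored: identical to t.add_(1)
print(t)       # tensor([1., 1., 1.])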