So Jeremy is teaching the training loop. In previous videos we learned that it consists of a forward pass, then a loss calculation, and then a backward pass.

```
for epoch in range(epochs):
    for i in range((n-1)//bs + 1):
        # (n-1)//bs + 1 evaluates to 782 here -- don't understand, shouldn't it be 784?
        # set_trace()  (commented-out breakpoint)
        start_i = i * bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        loss = loss_func(model(xb), yb)
        loss.backward()
        with torch.no_grad():
            for l in model.layers:
                if hasattr(l, 'weight'):
                    l.weight -= l.weight.grad * lr
                    l.bias -= l.bias.grad * lr
                    l.weight.grad.zero_()
                    l.bias .grad.zero_()
```
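About the comment on the batch count: a quick check of the formula `(n-1)//bs + 1`, assuming MNIST-style sizes (`n = 50000` training rows, `bs = 64` -- illustrative values, not taken from the notebook). It is a ceiling division that counts minibatches, including a partial last batch; 784 is a different number entirely (28 × 28 = 784 pixels per MNIST image).

```python
# Batch count = ceil(n / bs), written with integer arithmetic.
# Assumed sizes: n = 50000 rows, bs = 64 per batch.
n, bs = 50000, 64
n_batches = (n - 1) // bs + 1  # covers the partial final batch
print(n_batches)  # 782

# 784 comes from the input size, not the batch count:
print(28 * 28)  # 784
```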

But in this piece of code that Jeremy wrote I can't see an explicit forward pass. I can see what looks like the backward pass, where `weight -= grad * lr` and `bias -= grad * lr` happen.

But where exactly are the two passes?
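For reference, the forward pass is tucked inside `loss_func(model(xb), yb)` -- the `model(xb)` call -- and `loss.backward()` is the backward pass; the subtraction lines afterwards are the weight update, not a pass. A stripped-down sketch with a one-parameter model and a hand-derived gradient (toy numbers, no torch) shows where each step lives:

```python
# One-parameter "model": pred = w * x, squared-error loss.
# Toy values chosen so the true weight is 4.
w = 0.0
lr = 0.1
xb, yb = 2.0, 8.0  # a "batch" of one example

for _ in range(100):
    pred = w * xb                  # forward pass: run the model on the batch
    loss = (pred - yb) ** 2        # loss calculation
    grad = 2 * (pred - yb) * xb    # backward pass: d(loss)/dw by the chain rule
    w -= grad * lr                 # weight update, like l.weight -= l.weight.grad * lr

print(round(w, 4))  # converges to 4.0
```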

Also, why is there a space between `l.bias` and `.grad.zero_()`?

Shouldn't it be written continuously, like `l.bias.grad.zero_()`?
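As it turns out, Python ignores whitespace around the attribute-access dot, so `l.bias .grad.zero_()` parses identically to `l.bias.grad.zero_()` -- the space is just unusual style (likely a typo). A tiny demonstration with a throwaway class:

```python
class Dummy:
    pass

d = Dummy()
d.value = 41

a = d.value    # normal spelling
b = d .value   # space before the dot: legal, parses the same
print(a == b)  # True -- both read the same attribute
```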