Hello. In the End-to-End SGD Example in notebook 04, there is a function defined:
def apply_step(params, prn=True):
    preds = f(time, params)
    loss = mse(preds, speed)
    loss.backward()
    params.data -= lr * params.grad.data
    params.grad = None
    if prn: print(loss.item())
    return preds
This function takes the current parameters as input, performs one step of optimization, prints the new loss, and returns the predictions. There is also a global variable params defined earlier:
params = torch.randn(3).requires_grad_()
params
Out: tensor([-0.7409, 0.3618, 1.9199], requires_grad=True)
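For context, the other names used inside apply_step (f, mse, time, speed, lr) come from earlier cells of the notebook. As far as I remember they are defined roughly like this (reconstructed from memory, so the details may differ slightly):

import torch

time = torch.arange(0, 20).float()                    # 20 time steps
speed = torch.randn(20)*3 + 0.75*(time - 9.5)**2 + 1  # noisy quadratic data

def f(t, params):
    a, b, c = params
    return a*(t**2) + (b*t) + c                       # quadratic model

def mse(preds, targets):
    return ((preds - targets)**2).mean()              # mean squared error

lr = 1e-5                                             # learning rate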
When I call this function 10 times like this:
for i in range(10):
    apply_step(params)
and then check the value of the global variable params, I see that it is now different from before!
In: params
Out: tensor([0.1022, 0.4145, 1.9254], requires_grad=True)
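To check whether this is specific to apply_step, I tried a minimal sketch of my own (toy code, not from the notebook), and it shows the same behaviour: a function that updates a tensor argument through .data also changes the caller's tensor:

import torch

def step(p):
    p.data -= 1.0              # in-place update of the tensor's storage

t = torch.randn(3).requires_grad_()
before = t.clone()
step(t)
print(torch.equal(before, t))  # prints False: the tensor I passed in changed too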
I am a little confused: why does this function modify the value of the global variable params?
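One more thing I checked in the same session (my own experiment, not from the notebook): the variable still points at the same tensor object after the updates, only its values changed, so nothing reassigned params:

obj_id = id(params)
apply_step(params, prn=False)
print(id(params) == obj_id)    # prints True: same object, mutated in place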