No with torch.no_grad() in optim.SGD.step

I was digging into optim.SGD.step and was expecting to find with torch.no_grad(), but it is absent!
Instead, I found:

    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            weight_decay = group['weight_decay']
            momentum = group['momentum']
            dampening = group['dampening']
            nesterov = group['nesterov']

            for p in group['params']:
                if p.grad is None:
                    continue
                d_p = p.grad.data
                if weight_decay != 0:
                    d_p.add_(weight_decay, p.data)
                if momentum != 0:
                    param_state = self.state[p]
                    if 'momentum_buffer' not in param_state:
                        buf = param_state['momentum_buffer'] = torch.zeros_like(p.data)
                        buf.mul_(momentum).add_(d_p)
                    else:
                        buf = param_state['momentum_buffer']
                        buf.mul_(momentum).add_(1 - dampening, d_p)
                    if nesterov:
                        d_p = d_p.add(momentum, buf)
                    else:
                        d_p = buf

                p.data.add_(-group['lr'], d_p)

        return loss

Does this mean that using ".data" just gets the values of the gradients/parameters, and can be used instead of with torch.no_grad()? If so, why is that way better?

.data is available for backwards compatibility in PyTorch. It was the idiom in use at the time optim.SGD was written. Using .data is not recommended in current PyTorch; use torch.no_grad() instead.
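For reference, here is a minimal sketch of the recommended style: the same kind of in-place SGD update, but performed under torch.no_grad() instead of through .data. (The tensor w, the loss, and the learning rate here are illustrative, not taken from optim.SGD itself.)

```python
import torch

# Illustrative setup: a leaf parameter and a dummy loss.
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()              # populates w.grad

lr = 0.1
with torch.no_grad():
    # Inside no_grad(), in-place ops on leaf tensors are allowed and
    # are not recorded by autograd, so no graph is built for the update.
    w.add_(w.grad, alpha=-lr)
w.grad.zero_()               # clear the gradient for the next iteration
```

Without the no_grad() block, the in-place w.add_ on a leaf tensor that requires grad would raise a RuntimeError.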


.data is a reference to the underlying value of a variable/tensor in PyTorch, and any operation done on .data is NOT tracked by PyTorch's autograd. I actually don't know if there is any advantage of this over with torch.no_grad(). Maybe someone else may be able to answer that.

Other than that .data may simply be removed from PyTorch in the future, as @kushaj pointed out.
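To make the "not tracked by autograd" point concrete, here is a small example (illustrative variable names) where mutating .data in the middle of a computation bypasses autograd's version counter, so backward() silently uses the mutated value and returns a wrong gradient:

```python
import torch

x = torch.ones(2, requires_grad=True)
y = x * x                 # correct dy/dx at this point is 2*x = 2
x.data.mul_(10)           # untracked in-place change; x is now 10
y.sum().backward()
print(x.grad)             # tensor([20., 20.]) -- computed from the new x, not 2
```

Doing x.mul_(10) directly (without .data) would instead raise a RuntimeError, rather than silently corrupting the gradient; that safety check is exactly what .data sidesteps.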

Thanks @PalaashAgrawal & @kushaj