Ch4: Why do we convert a vector (rank-1 tensor) to a scalar (rank-0 tensor)?

In chapter 4 of the book, we have this code:

from torch import tensor  # needed if running this outside the book's notebook

xt = tensor([3.,4.,10.]).requires_grad_()
xt

with this output:

tensor([ 3.,  4., 10.], requires_grad=True)

Then we convert the vector (rank-1 tensor) to a scalar (rank-0 tensor) by summing the squares of its elements:

def f(x): return (x**2).sum()

yt = f(xt)
yt

Output:

tensor(125., grad_fn=<SumBackward0>)
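
As a sanity check, this matches summing the squares by hand: 3**2 + 4**2 + 10**2 = 9 + 16 + 100 = 125.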

Why do we convert the vector (rank-1 tensor) to a scalar (rank-0 tensor)? Is it because backward() needs a scalar output to compute gradients?
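
For reference, here is a quick experiment I ran (a minimal sketch in plain PyTorch, not code from the book) that seems to confirm this: calling backward() on a rank-1 output raises an error, while reducing to a scalar with sum() works:

import torch

xt = torch.tensor([3., 4., 10.], requires_grad=True)

# backward() on a rank-1 output fails, because PyTorch cannot know
# how to weight the individual output elements into a single gradient
try:
    (xt**2).backward()
except RuntimeError as e:
    print(e)  # "grad can be implicitly created only for scalar outputs"

# reducing to a scalar with sum() gives backward() one value to
# differentiate, and each input element gets its own gradient
yt = (xt**2).sum()
yt.backward()
print(xt.grad)  # tensor([ 6.,  8., 20.]), i.e. 2*x for each element

(Passing an explicit gradient argument, e.g. (xt**2).backward(gradient=torch.ones(3)), also avoids the error, which makes me think sum() is just the simplest way to get a scalar for backward() to work with.)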