Hi everyone,

I’m trying to duplicate the solver Jeremy uses in Part 1, Lesson 4 (around 1:08), but in Python rather than Excel. You can find details of the original lecture in @PoonamV’s lecture notes, about two-thirds of the way down the page.

I’d like to understand how this works at a more basic level, so I’m trying to translate it into Python.

The code is largely stolen from this notebook: https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson2-sgd.ipynb

```
from fastai.basics import tensor, nn
import torch, numpy

def hypothesis(vertLatents, horiLatents):
    # Predicted block: the product of the two latent-factor matrices
    return numpy.dot(vertLatents, horiLatents)

def mse(y_hat, y):
    return ((y_hat - y)**2).mean()

def update():
    y_hat = tensor(hypothesis(vertLatents, horiLatents))
    y_hat.requires_grad = True
    loss = mse(y_hat, blockData)
    if t % 10 == 0: print(t, '-------------', loss)
    loss.backward()
    with torch.no_grad():
        a.sub_(lr * a.grad())
        a.grad.zero_()

vecLatents = 10
shape = (20, 14)
blockData = tensor(numpy.random.random_sample(shape))  # the "data" to factorise
horiLatents = numpy.random.random_sample((vecLatents, shape[1]))
vertLatents = numpy.random.random_sample((shape[0], vecLatents))
a = nn.Parameter(tensor(hypothesis(vertLatents, horiLatents)))
print(a)

lr = 1e-1
for t in range(100):
    update()
```

This code fails, I think because I’m not calculating the gradient of `a`. I can see this when I print out `a` – it doesn’t print the `grad` – even though I can print out `a.requires_grad` and that evaluates to `True` (there’s a minimal demonstration of the same behaviour after the traceback below). The error is here:

```
Traceback (most recent call last):
  File "./TestOptimize.py", line 40, in <module>
    update()
  File "./TestOptimize.py", line 24, in update
    a.sub_(lr * a.grad())
TypeError: 'NoneType' object is not callable
```
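For comparison, even a tiny standalone PyTorch snippet (mine, not from the lesson) shows the same behaviour: a tensor can report `requires_grad=True` while its `grad` is still `None`, because `grad` is only populated once `backward()` has actually flowed through that tensor.

```
import torch

# A leaf tensor that never takes part in a backward pass
x = torch.ones(3, requires_grad=True)
print(x.requires_grad)  # True
print(x.grad)           # None - backward() has never flowed through x

# Once a backward pass involves x, grad gets populated
loss = (x * 2).sum()
loss.backward()
print(x.grad)           # tensor([2., 2., 2.])
```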

Any ideas of what I’m doing wrong?
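For what it’s worth, my current guess is that `loss.backward()` never reaches `a`, because `y_hat` is rebuilt from the raw numpy arrays on every call rather than from `a` itself – and `grad` seems to be an attribute, not a method, so `a.grad()` would fail even if the gradient existed. Here’s an unverified sketch of what I think the fix might look like, making the two latent matrices the parameters instead of `a` (the variable names here are my own):

```
import torch
from torch import nn

torch.manual_seed(0)
shape, vecLatents, lr = (20, 14), 10, 1e-1

blockData = torch.rand(shape)                          # the data to factorise
vert = nn.Parameter(torch.rand(shape[0], vecLatents))  # vertical latent factors
hori = nn.Parameter(torch.rand(vecLatents, shape[1]))  # horizontal latent factors

for t in range(100):
    y_hat = vert @ hori          # prediction built from the parameters themselves
    loss = ((y_hat - blockData) ** 2).mean()
    if t % 10 == 0:
        print(t, loss.item())
    loss.backward()
    with torch.no_grad():
        vert.sub_(lr * vert.grad)    # grad is an attribute, not a call
        hori.sub_(lr * hori.grad)
        vert.grad.zero_()
        hori.grad.zero_()
```

If that’s roughly the right shape, I’d still like to understand why my original version can’t just optimise `a` directly.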

NB: I’m halfway through Lesson 5, so if this is spelled out in a subsequent lesson I’d be happy to be pointed that way.

TIA,

Rajan