# Gradient won't lower loss

Can you please look at the following code and let me know why my loss doesn’t improve?

```python
# 28*28 weights (flat vector)
w = tensor([random.random()-0.5 for i in range(28*28)]).requires_grad_()

# NOTE: b (the bias) and dl (a DataLoader over the dataset) are defined
# elsewhere in the notebook and were not included in this snippet.

def sigmoid(x):
    return 1/(1+torch.exp(-x))

def predict(x):
    return sigmoid(x@w.T+b)

def error(pred,y):
    return torch.where(y==1, 1-pred, pred)  # body was lost in the post; a where()-based loss per the replies

lr = 1

print('starting loss:'+str(error(predict(train_X),train_y).mean()))
for x,y in dl:
    error(predict(x),y).mean().backward()
    a = w.grad  # (not shown in the snippet; a holds the gradient used below)
    print('change in grad:'+str(a.sum())+' change in w:'+str(w.sum()))
    w.data -= lr*a
print('final loss:'+str(error(predict(train_X),train_y).mean()))
```

Here’s the result of running the last loop. As you can see, the weights do change but the overall loss doesn’t.

```
change in grad:tensor(-0.0512) change in w:tensor(4.7489, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1743) change in w:tensor(4.8002, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1798) change in w:tensor(4.9744, grad_fn=<SumBackward0>)
change in grad:tensor(0.0791) change in w:tensor(5.1542, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0504) change in w:tensor(5.0751, grad_fn=<SumBackward0>)
change in grad:tensor(0.0676) change in w:tensor(5.1255, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0116) change in w:tensor(5.0579, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1402) change in w:tensor(5.0694, grad_fn=<SumBackward0>)
change in grad:tensor(0.0881) change in w:tensor(5.2097, grad_fn=<SumBackward0>)
change in grad:tensor(0.0187) change in w:tensor(5.1215, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1067) change in w:tensor(5.1029, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0685) change in w:tensor(5.2095, grad_fn=<SumBackward0>)
change in grad:tensor(0.0827) change in w:tensor(5.2780, grad_fn=<SumBackward0>)
change in grad:tensor(0.) change in w:tensor(5.1954, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0134) change in w:tensor(5.1954, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0310) change in w:tensor(5.2087, grad_fn=<SumBackward0>)
change in grad:tensor(0.0160) change in w:tensor(5.2397, grad_fn=<SumBackward0>)
change in grad:tensor(0.1495) change in w:tensor(5.2238, grad_fn=<SumBackward0>)
change in grad:tensor(0.0106) change in w:tensor(5.0743, grad_fn=<SumBackward0>)
change in grad:tensor(0.0690) change in w:tensor(5.0637, grad_fn=<SumBackward0>)
change in grad:tensor(0.0298) change in w:tensor(4.9947, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1751) change in w:tensor(4.9649, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0475) change in w:tensor(5.1400, grad_fn=<SumBackward0>)
change in grad:tensor(0.0978) change in w:tensor(5.1875, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0923) change in w:tensor(5.0897, grad_fn=<SumBackward0>)
change in grad:tensor(0.1903) change in w:tensor(5.1820, grad_fn=<SumBackward0>)
change in grad:tensor(0.0948) change in w:tensor(4.9917, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1095) change in w:tensor(4.8969, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0346) change in w:tensor(5.0064, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0943) change in w:tensor(5.0410, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0962) change in w:tensor(5.1354, grad_fn=<SumBackward0>)
change in grad:tensor(0.0682) change in w:tensor(5.2316, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0953) change in w:tensor(5.1634, grad_fn=<SumBackward0>)
change in grad:tensor(0.0631) change in w:tensor(5.2587, grad_fn=<SumBackward0>)
change in grad:tensor(0.) change in w:tensor(5.1956, grad_fn=<SumBackward0>)
change in grad:tensor(0.0603) change in w:tensor(5.1956, grad_fn=<SumBackward0>)
change in grad:tensor(0.0427) change in w:tensor(5.1353, grad_fn=<SumBackward0>)
change in grad:tensor(0.0474) change in w:tensor(5.0925, grad_fn=<SumBackward0>)
change in grad:tensor(0.0746) change in w:tensor(5.0452, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1346) change in w:tensor(4.9706, grad_fn=<SumBackward0>)
change in grad:tensor(-0.2334) change in w:tensor(5.1052, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1048) change in w:tensor(5.3386, grad_fn=<SumBackward0>)
change in grad:tensor(0.) change in w:tensor(5.4434, grad_fn=<SumBackward0>)
change in grad:tensor(0.) change in w:tensor(5.4434, grad_fn=<SumBackward0>)
change in grad:tensor(0.0214) change in w:tensor(5.4434, grad_fn=<SumBackward0>)
change in grad:tensor(0.1048) change in w:tensor(5.4219, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1088) change in w:tensor(5.3172, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0883) change in w:tensor(5.4260, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1138) change in w:tensor(5.5143, grad_fn=<SumBackward0>)
```

I think it's because w and b are plain PyTorch tensors, not autograd variables. Try converting predict() into a Module with w and b as parameters defined in `__init__()`.

There may also be a problem differentiating where(). You will have to see whether backward complains.
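A minimal sketch of that suggestion (class and variable names here are illustrative, not from the original post):

```python
import torch
from torch import nn

class LinearSigmoid(nn.Module):          # illustrative name
    def __init__(self, n_in=28*28):
        super().__init__()
        # nn.Parameter registers w and b so autograd and optimizers see them
        self.w = nn.Parameter(torch.randn(n_in, 1) * 0.1)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return torch.sigmoid(x @ self.w + self.b)

model = LinearSigmoid()
pred = model(torch.rand(10, 28*28))
print(pred.shape)                        # torch.Size([10, 1])
```

With this shape of `self.w`, the output is always `(batch, 1)`, which also sidesteps the broadcasting issues a 1-D weight vector can cause.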

Thanks.
w and b were defined as autograd tensors (with requires_grad). I edited the post to reflect that.

Hi yanivbl,

I put your code into a notebook and it worked fine. Two possible explanations I see: the first print statement runs a forward pass that can interfere with the gradients, and an lr of 1 is very high. You may have accidentally created a pathological case.
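As a side note, a toy sketch (separate from your code) of how `backward()` interacts with `.grad`: gradients are accumulated, not replaced, so any loop that reads `w.grad` without zeroing it between steps is updating with a running sum of every previous gradient.

```python
import torch

w = torch.tensor([1.0, 2.0], requires_grad=True)

for step in range(3):
    loss = (w * w).sum()   # toy loss; d(loss)/dw = 2*w = [2., 4.]
    loss.backward()        # adds [2., 4.] into w.grad on every call

accumulated = w.grad.clone()
print(accumulated)         # sums to [6., 12.] after three backward passes
w.grad.zero_()             # reset so the next backward starts from a fresh gradient
```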

Here are the changes I made:

```python
bs = 10
train_X = torch.rand((bs,784))  # Simple X and y
train_y = torch.rand((bs))

lr = .1  # Reduce lr
x = train_X
y = train_y

with torch.no_grad():  # Do not affect gradients of w and b
    print('starting loss:'+str(error(predict(train_X),train_y).mean()))
for i in range(10):
    error(predict(x),y).mean().backward()
    a = w.grad  # (not shown in the original snippet)
    print('change in grad:'+str(a.sum())+' change in w:'+str(w.sum()))
    w.data -= lr*a
print('final loss:'+str(error(predict(train_X),train_y).mean()))
```

```
starting loss:tensor(0.1521)
change in grad:tensor(23.5624) change in w:tensor(-7.2704, grad_fn=<SumBackward0>)
change in grad:tensor(16.0539) change in w:tensor(-9.6266, grad_fn=<SumBackward0>)
change in grad:tensor(13.4880) change in w:tensor(-11.2320, grad_fn=<SumBackward0>)
change in grad:tensor(9.7620) change in w:tensor(-12.5808, grad_fn=<SumBackward0>)
change in grad:tensor(6.7026) change in w:tensor(-13.5570, grad_fn=<SumBackward0>)
change in grad:tensor(4.8993) change in w:tensor(-14.2273, grad_fn=<SumBackward0>)
change in grad:tensor(3.8159) change in w:tensor(-14.7172, grad_fn=<SumBackward0>)
change in grad:tensor(3.1121) change in w:tensor(-15.0988, grad_fn=<SumBackward0>)
change in grad:tensor(2.6230) change in w:tensor(-15.4100, grad_fn=<SumBackward0>)
change in grad:tensor(2.2649) change in w:tensor(-15.6723, grad_fn=<SumBackward0>)
final loss:tensor(0.0052)
```

Please let me know what you figure out.


Hey @Pomo,

Really appreciate your help. I’ve been struggling with this a whole day and still have no clue as to where the problem lies…

I ran your code and it worked perfectly, but mine still fails, even after using no_grad.
I suspect my problem has to do with the dataset. I would greatly appreciate it if you could have a look.

```python
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()

#hide
from fastai.vision.all import *
from fastbook import *

matplotlib.rc('image', cmap='Greys')

path = untar_data(URLs.MNIST_SAMPLE)

Path.BASE_PATH = path

three_path = (path/'train'/'3').ls()
seven_path = (path/'train'/'7').ls()

threes = [tensor(Image.open(i)).float()/255 for i in three_path]
sevens = [tensor(Image.open(i)).float()/255 for i in seven_path]

# stacking
stacked_3 = torch.stack(threes)
stacked_7 = torch.stack(sevens)

# flatten the stacked tensors
flat_3 = stacked_3.view(len(stacked_3),28*28)
flat_7 = stacked_7.view(len(stacked_7),28*28)

# create dataset of train X & y by concatenating the 3s and 7s
train_X = torch.cat((flat_3,flat_7))
train_y = tensor([0]*len(flat_3)+[1]*len(flat_7)).unsqueeze(1)

# merge X and y into a single dataset
dset = list(zip(train_X,train_y))

# generate 28*28 weights
w = tensor([random.random()-0.5 for i in range(28*28)]).requires_grad_()

# define optimization functions
def sigmoid(x):
    return 1/(1+torch.exp(-x))
def predict(x):
    return sigmoid(x@w.T+b)
def error(pred,y):
    return torch.where(y==1, 1-pred, pred)  # body was lost in the post; a where()-based loss per the earlier reply

# NOTE: b (the bias) and dl (a DataLoader over dset) are defined
# elsewhere in the notebook and were not included in this snippet.

lr = 0.1

with torch.no_grad():  # Do not affect gradients of w and b
    print('starting loss:'+str(error(predict(train_X),train_y).mean()))
for x,y in dl:
    error(predict(x),y).mean().backward()
    a = w.grad  # (not shown in the snippet; a holds the gradient used below)
    print('change in grad:'+str(a.sum())+' change in w:'+str(w.sum()))
    w.data -= lr*a
print('final loss:'+str(error(predict(train_X),train_y).mean()))

# Pomo's code

bs = 10
train_X = torch.rand((bs,784))  # Simple X and y
train_y = torch.rand((bs))

lr = .1  # Reduce lr
x = train_X
y = train_y

with torch.no_grad():  # Do not affect gradients of w and b
    print('starting loss:'+str(error(predict(train_X),train_y).mean()))
for i in range(10):
    error(predict(x),y).mean().backward()
    a = w.grad
    print('change in grad:'+str(a.sum())+' change in w:'+str(w.sum()))
    w.data -= lr*a
print('final loss:'+str(error(predict(train_X),train_y).mean()))
```

Results of my run:
```
starting loss:tensor(0.5004)
change in grad:tensor(1.1957) change in w:tensor(6.5544, grad_fn=<SumBackward0>)
change in grad:tensor(-1.2987) change in w:tensor(6.4348, grad_fn=<SumBackward0>)
change in grad:tensor(0.4178) change in w:tensor(6.5647, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6468) change in w:tensor(6.5229, grad_fn=<SumBackward0>)
change in grad:tensor(1.2509) change in w:tensor(6.5876, grad_fn=<SumBackward0>)
change in grad:tensor(0.5383) change in w:tensor(6.4625, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6558) change in w:tensor(6.4087, grad_fn=<SumBackward0>)
change in grad:tensor(-1.2341) change in w:tensor(6.4743, grad_fn=<SumBackward0>)
change in grad:tensor(-1.0741) change in w:tensor(6.5977, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1329) change in w:tensor(6.7051, grad_fn=<SumBackward0>)
change in grad:tensor(0.1312) change in w:tensor(6.7184, grad_fn=<SumBackward0>)
change in grad:tensor(0.9670) change in w:tensor(6.7052, grad_fn=<SumBackward0>)
change in grad:tensor(0.6903) change in w:tensor(6.6085, grad_fn=<SumBackward0>)
change in grad:tensor(0.6477) change in w:tensor(6.5395, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6241) change in w:tensor(6.4747, grad_fn=<SumBackward0>)
change in grad:tensor(0.5215) change in w:tensor(6.5372, grad_fn=<SumBackward0>)
change in grad:tensor(-1.3369) change in w:tensor(6.4850, grad_fn=<SumBackward0>)
change in grad:tensor(-1.4484) change in w:tensor(6.6187, grad_fn=<SumBackward0>)
change in grad:tensor(-0.7693) change in w:tensor(6.7635, grad_fn=<SumBackward0>)
change in grad:tensor(0.9769) change in w:tensor(6.8405, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1305) change in w:tensor(6.7428, grad_fn=<SumBackward0>)
change in grad:tensor(-0.7845) change in w:tensor(6.7558, grad_fn=<SumBackward0>)
change in grad:tensor(0.2718) change in w:tensor(6.8343, grad_fn=<SumBackward0>)
change in grad:tensor(-0.2590) change in w:tensor(6.8071, grad_fn=<SumBackward0>)
change in grad:tensor(0.) change in w:tensor(6.8330, grad_fn=<SumBackward0>)
change in grad:tensor(0.5236) change in w:tensor(6.8330, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6891) change in w:tensor(6.7807, grad_fn=<SumBackward0>)
change in grad:tensor(0.1382) change in w:tensor(6.8496, grad_fn=<SumBackward0>)
change in grad:tensor(0.3899) change in w:tensor(6.8358, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1378) change in w:tensor(6.7968, grad_fn=<SumBackward0>)
change in grad:tensor(-2.3996) change in w:tensor(6.8105, grad_fn=<SumBackward0>)
change in grad:tensor(0.2762) change in w:tensor(7.0505, grad_fn=<SumBackward0>)
change in grad:tensor(-0.7645) change in w:tensor(7.0229, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6706) change in w:tensor(7.0993, grad_fn=<SumBackward0>)
change in grad:tensor(-1.9395) change in w:tensor(7.1664, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6970) change in w:tensor(7.3603, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6688) change in w:tensor(7.4300, grad_fn=<SumBackward0>)
change in grad:tensor(0.6635) change in w:tensor(7.4969, grad_fn=<SumBackward0>)
change in grad:tensor(0.6754) change in w:tensor(7.4306, grad_fn=<SumBackward0>)
change in grad:tensor(-1.8707) change in w:tensor(7.3630, grad_fn=<SumBackward0>)
change in grad:tensor(1.0975) change in w:tensor(7.5501, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6484) change in w:tensor(7.4403, grad_fn=<SumBackward0>)
change in grad:tensor(0.5459) change in w:tensor(7.5052, grad_fn=<SumBackward0>)
change in grad:tensor(-0.5194) change in w:tensor(7.4506, grad_fn=<SumBackward0>)
change in grad:tensor(0.6706) change in w:tensor(7.5025, grad_fn=<SumBackward0>)
change in grad:tensor(-0.5109) change in w:tensor(7.4355, grad_fn=<SumBackward0>)
change in grad:tensor(-0.2626) change in w:tensor(7.4866, grad_fn=<SumBackward0>)
change in grad:tensor(0.2643) change in w:tensor(7.5128, grad_fn=<SumBackward0>)
change in grad:tensor(1.2660) change in w:tensor(7.4864, grad_fn=<SumBackward0>)
final loss:tensor(0.4998)
```

Results from your run:

```
starting loss:tensor(0.8796)
change in grad:tensor(17.1915) change in w:tensor(7.3598, grad_fn=<SumBackward0>)
change in grad:tensor(25.7356) change in w:tensor(5.6406, grad_fn=<SumBackward0>)
change in grad:tensor(44.7939) change in w:tensor(3.0671, grad_fn=<SumBackward0>)
change in grad:tensor(51.9667) change in w:tensor(-1.4123, grad_fn=<SumBackward0>)
change in grad:tensor(26.0332) change in w:tensor(-6.6090, grad_fn=<SumBackward0>)
change in grad:tensor(12.1227) change in w:tensor(-9.2123, grad_fn=<SumBackward0>)
change in grad:tensor(7.2295) change in w:tensor(-10.4246, grad_fn=<SumBackward0>)
change in grad:tensor(5.0969) change in w:tensor(-11.1475, grad_fn=<SumBackward0>)
change in grad:tensor(3.9322) change in w:tensor(-11.6572, grad_fn=<SumBackward0>)
change in grad:tensor(3.2014) change in w:tensor(-12.0504, grad_fn=<SumBackward0>)
final loss:tensor(0.0071)
```

I believe I’ve found the error

Turns out I defined w as a 1-dimensional tensor instead of a 2-dimensional one.
I did this:
`w = tensor([random.random()-0.5 for i in range(28*28)]).requires_grad_()`

when it should have been:

`w = tensor([random.random()-0.5 for i in range(28*28)]).view(28*28,-1).requires_grad_()`
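For anyone hitting the same thing, here is a toy sketch of the failure mode (variable names are illustrative): with a 1-D `w`, `x @ w` returns shape `(N,)`, and subtracting the `(N,1)` targets silently broadcasts to `(N,N)`, so the mean is taken over the wrong tensor and the loss barely moves.

```python
import torch

N = 4
x = torch.rand(N, 784)
y = torch.zeros(N, 1)           # targets with shape (N, 1), as in the post

w1 = torch.rand(784)            # 1-D weights: the buggy version
w2 = w1.view(784, 1)            # 2-D weights: the fix

pred1 = torch.sigmoid(x @ w1)   # shape (N,)
pred2 = torch.sigmoid(x @ w2)   # shape (N, 1)

print((pred1 - y).shape)        # (N, N): silent broadcast, loss is garbage
print((pred2 - y).shape)        # (N, 1): what was intended
```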

WOW - That’s one lesson I will never forget.

@Pomo thanks again for your help. Deeply appreciated

Here are the updated results

```python
print(validate_epoch(linear1), end=' ')
for x,y in dl:
    error(predict(x),y).mean().backward()
    a = weights.grad  # (not shown in the snippet; a holds the gradient used below)
    print('change in grad:'+str(a.sum())+' change in w:'+str(weights.sum()))
    weights.data -= lr*a
print(validate_epoch(linear1), end=' ')
```

```
change in grad:tensor(0.1617) change in w:tensor(6.5544, grad_fn=<SumBackward0>)
change in grad:tensor(-1.8141) change in w:tensor(6.5382, grad_fn=<SumBackward0>)
change in grad:tensor(1.4818) change in w:tensor(6.7196, grad_fn=<SumBackward0>)
change in grad:tensor(-0.9886) change in w:tensor(6.5715, grad_fn=<SumBackward0>)
change in grad:tensor(-0.7500) change in w:tensor(6.6703, grad_fn=<SumBackward0>)
change in grad:tensor(-0.3553) change in w:tensor(6.7453, grad_fn=<SumBackward0>)
change in grad:tensor(-0.7791) change in w:tensor(6.7809, grad_fn=<SumBackward0>)
change in grad:tensor(1.6533) change in w:tensor(6.8588, grad_fn=<SumBackward0>)
change in grad:tensor(0.7571) change in w:tensor(6.6934, grad_fn=<SumBackward0>)
change in grad:tensor(1.1683) change in w:tensor(6.6177, grad_fn=<SumBackward0>)
change in grad:tensor(-1.1337) change in w:tensor(6.5009, grad_fn=<SumBackward0>)
change in grad:tensor(2.6782) change in w:tensor(6.6143, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1474) change in w:tensor(6.3464, grad_fn=<SumBackward0>)
change in grad:tensor(1.3872) change in w:tensor(6.3612, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1733) change in w:tensor(6.2225, grad_fn=<SumBackward0>)
change in grad:tensor(0.6742) change in w:tensor(6.2398, grad_fn=<SumBackward0>)
change in grad:tensor(2.2316) change in w:tensor(6.1724, grad_fn=<SumBackward0>)
change in grad:tensor(0.2642) change in w:tensor(5.9492, grad_fn=<SumBackward0>)
change in grad:tensor(1.1551) change in w:tensor(5.9228, grad_fn=<SumBackward0>)
change in grad:tensor(-0.2221) change in w:tensor(5.8073, grad_fn=<SumBackward0>)
change in grad:tensor(0.8829) change in w:tensor(5.8295, grad_fn=<SumBackward0>)
change in grad:tensor(-0.6571) change in w:tensor(5.7412, grad_fn=<SumBackward0>)
change in grad:tensor(0.2562) change in w:tensor(5.8069, grad_fn=<SumBackward0>)
change in grad:tensor(-0.1034) change in w:tensor(5.7813, grad_fn=<SumBackward0>)
change in grad:tensor(0.3308) change in w:tensor(5.7916, grad_fn=<SumBackward0>)
change in grad:tensor(0.9183) change in w:tensor(5.7586, grad_fn=<SumBackward0>)
change in grad:tensor(0.2254) change in w:tensor(5.6667, grad_fn=<SumBackward0>)
change in grad:tensor(-0.8814) change in w:tensor(5.6442, grad_fn=<SumBackward0>)
change in grad:tensor(1.3082) change in w:tensor(5.7323, grad_fn=<SumBackward0>)
change in grad:tensor(0.9839) change in w:tensor(5.6015, grad_fn=<SumBackward0>)
change in grad:tensor(-0.9530) change in w:tensor(5.5031, grad_fn=<SumBackward0>)
change in grad:tensor(0.1812) change in w:tensor(5.5984, grad_fn=<SumBackward0>)
change in grad:tensor(0.4798) change in w:tensor(5.5803, grad_fn=<SumBackward0>)
change in grad:tensor(0.5709) change in w:tensor(5.5323, grad_fn=<SumBackward0>)
change in grad:tensor(-0.0080) change in w:tensor(5.4752, grad_fn=<SumBackward0>)
change in grad:tensor(0.8253) change in w:tensor(5.4760, grad_fn=<SumBackward0>)
change in grad:tensor(0.1289) change in w:tensor(5.3935, grad_fn=<SumBackward0>)
change in grad:tensor(2.1646) change in w:tensor(5.3806, grad_fn=<SumBackward0>)
change in grad:tensor(0.7722) change in w:tensor(5.1642, grad_fn=<SumBackward0>)
change in grad:tensor(0.4386) change in w:tensor(5.0869, grad_fn=<SumBackward0>)
change in grad:tensor(0.0652) change in w:tensor(5.0431, grad_fn=<SumBackward0>)
change in grad:tensor(0.9074) change in w:tensor(5.0366, grad_fn=<SumBackward0>)
change in grad:tensor(0.6684) change in w:tensor(4.9458, grad_fn=<SumBackward0>)
change in grad:tensor(0.2032) change in w:tensor(4.8790, grad_fn=<SumBackward0>)
change in grad:tensor(-0.5990) change in w:tensor(4.8587, grad_fn=<SumBackward0>)
change in grad:tensor(0.3370) change in w:tensor(4.9186, grad_fn=<SumBackward0>)
change in grad:tensor(-0.3383) change in w:tensor(4.8849, grad_fn=<SumBackward0>)
change in grad:tensor(1.3079) change in w:tensor(4.9187, grad_fn=<SumBackward0>)
change in grad:tensor(-0.5136) change in w:tensor(4.7879, grad_fn=<SumBackward0>)
```

It has really helped me to constantly check the shapes of tensors while designing, especially in pure PyTorch. `x,y = next(iter(dl))` will get you a sample batch to check. Remember that with `x@w`, the inner dimensions touching the `@` must match, and that they cancel out and disappear in the result.
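That rule in a toy sketch: the inner dimensions must agree, they vanish from the result, and a mismatch fails loudly rather than broadcasting.

```python
import torch

x = torch.rand(10, 784)     # (batch, features)
w = torch.rand(784, 1)      # (features, out): the inner 784s touch the @

out = x @ w
print(out.shape)            # torch.Size([10, 1]): the 784s matched and cancelled

# mismatched inner dimensions raise an error instead of silently broadcasting
try:
    x @ torch.rand(10, 1)
except RuntimeError as e:
    print("shape error:", e)
```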