I’m looking at the basic sgd sheet in graddesc.xlsm.
In cell F4 (err^2), the error function is defined as (y - y_predicted)^2. Writing this out as a formula gives (y - (b + ax))^2.
Taking the derivative with respect to b would then give -2(ax + b - y), rather than the 2(ax + b - y) that’s in the worksheet. This would flip the sign of the de/db value.
I realize (y - (b + ax))^2 can be rewritten as ((b + ax) - y)^2, since the expression inside the square is just multiplied by -1 and the square cancels the sign. But I’m wondering: was there a reason for this, or was it just to make the formula easier to differentiate?
Apologies if I’ve made a mistake in my math somewhere.
Edit: Maybe to simplify my question: the error function uses ([actual y] - [predicted y])^2, but when the derivative is taken, the formula is treated as ([predicted y] - [actual y])^2. I know the order of subtraction doesn’t matter for the error value itself, since the result is squared, but it does seem to affect the sign of one of the partial derivatives, so I’m wondering why this discrepancy exists.
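For concreteness, here’s a quick numeric check of the part I’m confident about (the values of y, a, b, and x are made up for illustration, not taken from the worksheet): the squared error comes out the same regardless of subtraction order, but the difference inside the square flips sign.

```python
# Illustrative values (not from the worksheet)
y_actual, a, b, x = 5.0, 2.0, 1.0, 1.5
y_pred = b + a * x  # prediction from the linear model: 4.0

d1 = y_actual - y_pred  # (actual - predicted) = 1.0
d2 = y_pred - y_actual  # (predicted - actual) = -1.0

print(d1**2, d2**2)  # both 1.0: identical squared error
print(d1, d2)        # 1.0 and -1.0: opposite signs inside the square
```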