Hello, I'm currently watching lesson 2, and Jeremy is asked what the difference is between loss and error. He mentions that some error metrics will not change at all if you change the parameters, whereas the loss will always change if you adjust the parameters. I don't fully understand this. Could anyone give an example of an error metric that would be unaffected by adjusting parameters?
Metrics = human-comprehensible understanding of model performance (accuracy, error_rate, etc)
Loss = the internal loss function the model uses to adjust its weights.
In some cases the two overlap, when the loss itself is easy to interpret, as with MSE and RMSE.
So when you have a metric (or define one in fastai), it never touches the model's weights or gradients, whereas the loss function does.
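One way to see the distinction concretely: nudge a model's outputs slightly, and a metric like accuracy (which only looks at the predicted class) stays put, while a loss like cross-entropy moves. A minimal sketch in plain Python, with made-up predictions and targets:

```python
import math

def accuracy(preds, targets):
    # The metric: fraction of correct predictions. It only cares
    # whether each probability lands on the right side of 0.5.
    return sum(int((p > 0.5) == t) for p, t in zip(preds, targets)) / len(targets)

def bce_loss(preds, targets):
    # The loss: binary cross-entropy. It is sensitive to every
    # small change in the predicted probabilities.
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for p, t in zip(preds, targets)) / len(targets)

targets = [1, 0, 1]
preds_a = [0.90, 0.20, 0.70]
preds_b = [0.91, 0.19, 0.71]  # slightly different model outputs

print(accuracy(preds_a, targets), accuracy(preds_b, targets))  # same: 1.0 and 1.0
print(bce_loss(preds_a, targets), bce_loss(preds_b, targets))  # different values
```

The two sets of predictions give identical accuracy but different losses, which is exactly why training uses the loss.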
Does this help some?
Take the example of the 3's vs 7's classification. For simplicity, assume that the classifier is w0 + w1x1 + w2x2 (i.e. assume there are only 2 pixels).
Now, our classifier is such that if w0 + w1x1 + w2x2 is greater than 0, the model predicts that the number is a three, and if it is negative, a seven. Here, w0, w1, and w2 are the parameters.
Suppose some input results in this expression evaluating to 100.
Now if I were to decrease w0 by just 1, the expression would evaluate to 99, which still gives the same prediction (that it is a three). So even though a parameter value has changed, the predictions are the same, and hence the accuracy (or any other error metric) is also the same. (Accuracy is simply the fraction of examples classified correctly.)
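This is easy to check in code. Here's a minimal sketch of that toy classifier, with made-up pixel values and weights chosen so the score comes to 100:

```python
def score(w0, w1, w2, x1, x2):
    # The linear expression w0 + w1*x1 + w2*x2.
    return w0 + w1 * x1 + w2 * x2

def predict(s):
    # Positive score -> "3", otherwise -> "7".
    return "3" if s > 0 else "7"

x1, x2 = 2.0, 1.0               # the two (made-up) pixel values
w0, w1, w2 = 10.0, 40.0, 10.0   # made-up parameters; score comes to 100

s_before = score(w0, w1, w2, x1, x2)        # 100.0
s_after = score(w0 - 1, w1, w2, x1, x2)     # 99.0: a parameter changed...
print(predict(s_before), predict(s_after))  # ...but both still predict "3"
```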
Of course, this does not mean that a change in parameters will never change the error metric. In fact, that is the goal: to change the parameters so that we get the best value for the metric.
If you train long enough, the parameters will change such that the error metric is at its best (for example, the HIGHEST accuracy).
Yes it does, thank you. Although we could have some error metrics that do have a relationship with the weights and from which we can derive gradients, correct?
Thank you! Makes a lot of sense.
All metrics in fastai are computed on the validation set, so there are no gradients available.
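And even if you did compute a metric like accuracy during training, it couldn't drive the optimizer: it is piecewise constant in the parameters, so its gradient is zero almost everywhere. A quick numerical sketch (all values made up), building on the toy 3-vs-7 scorer from earlier in the thread:

```python
REST = 100.0  # stands in for w1*x1 + w2*x2, held fixed; made-up value

def accuracy_of(w0):
    # The metric: 1.0 if this example is classified as a three, else 0.0.
    # Flat in w0, so small nudges change nothing.
    return 1.0 if w0 + REST > 0 else 0.0

def squared_loss_of(w0):
    # A hypothetical differentiable loss: squared distance of the
    # score from a (made-up) target of 0.
    return (w0 + REST) ** 2

# Central finite differences around w0 = 0.
eps = 1e-3
grad_metric = (accuracy_of(eps) - accuracy_of(-eps)) / (2 * eps)
grad_loss = (squared_loss_of(eps) - squared_loss_of(-eps)) / (2 * eps)
print(grad_metric)  # 0.0  -> nothing for an optimizer to follow
print(grad_loss)    # ~200 -> a usable training signal
```

So a differentiable function like MSE can indeed serve as both a loss and a metric, but something threshold-based like accuracy only makes sense as a metric.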