Difference between cost function, error function and loss function?

Hello all,

Can someone explain to me what the actual difference between cost function, loss function, and error function is?

From what I understand, an error function gives the difference between the actual value and the predicted value.

The loss function is the function we try to optimize to reduce the error.

So, what is the cost function? Can we describe it as the same as the loss function or is it closer to the error function?

Some people say the cost function is a function over the loss function, some sort of aggregation such as MSE or SSE, but I was under the impression that MSE and SSE were themselves loss functions that try to show how bad the error is.

Also, when people say the Sum of Squared Errors, is it reasonable to understand that as the loss being a function of the error? One article on the internet says that we get the error from the loss function, but my understanding was that you compute the loss from the error.

Please let me know if my understanding is correct, and if not please provide an explanation for what the three functions mean.


As far as I know, these three terms are used more or less interchangeably in practice, although some textbooks reserve "loss" for a single example and "cost" for the aggregate over the whole training set.
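To make one common convention concrete (it is a convention, not a universal definition): the error is the per-example residual between actual and predicted, the loss is a function of that error for a single example (e.g. squared error), and the cost is an aggregate of the per-example losses (e.g. MSE). A minimal sketch under that assumption:

```python
def error(y_true, y_pred):
    # per-example residual: actual minus predicted
    return y_true - y_pred

def loss(y_true, y_pred):
    # per-example squared-error loss: a function of the error
    return error(y_true, y_pred) ** 2

def cost(ys_true, ys_pred):
    # mean squared error: the average of the per-example losses
    losses = [loss(t, p) for t, p in zip(ys_true, ys_pred)]
    return sum(losses) / len(losses)

actual = [3.0, 5.0, 2.0]
predicted = [2.5, 5.0, 4.0]
print(cost(actual, predicted))  # MSE over the three examples
```

Under this convention the loss is computed from the error (not the other way around), and the cost is what you actually minimize during training, since it summarizes the losses over the whole dataset.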