# Use PyTorch loss function for MNIST

Hello,

Happy new year all!
I am new to fastai and am posting a question for the first time.

I was going through Lesson 4 of Part 1 (2020). I see that at the beginning two loss functions, L1 and L2, are introduced, and both are already defined in PyTorch. My question is: why can't we use one of those as the MNIST loss function? What is the need to define a new one?

I tried to use the loss function defined in Pytorch on the example given in the chapter -

```python
from torch import tensor
import torch.nn.functional as F

trgts = tensor([1, 0, 1])
prds  = tensor([0.9, 0.4, 0.2])

# mse_loss gives the mean squared error; .sqrt() turns it into RMSE
F.mse_loss(prds, trgts).sqrt()
```

and I see the loss come out to 0.5196, which is not much worse than the 0.4333 calculated with the loss function defined in the chapter. So why not use the existing loss function?
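For reference, here is a plain-Python sketch of both calculations (assuming I've read the chapter's `mnist_loss` correctly, i.e. `torch.where(trgts==1, 1-prds, prds).mean()`); it reproduces the two numbers above without needing torch:

```python
import math

trgts = [1, 0, 1]
prds  = [0.9, 0.4, 0.2]

# mnist_loss from the chapter: distance of each prediction
# from its target, averaged
mnist_loss = sum(1 - p if t == 1 else p for p, t in zip(prds, trgts)) / len(prds)

# RMSE: mean squared error, then the square root
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(prds, trgts)) / len(prds))

print(round(mnist_loss, 4))  # 0.4333
print(round(rmse, 4))        # 0.5196
```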

Can someone clarify?

Thanks

@achhem
welcome to the community.
MSE is used for regression, i.e., when the task is to predict a number that is as close as possible to the target. Here, though, we're dealing with classification; in other words, a prediction is either right or wrong. There's no "close to". For example, MNIST has 10 digits. Suppose you used MSE as your loss function on the digit labels. Then, when the ground truth is 9, a prediction of 8 would count as better than a prediction of 5. That isn't true: it is either right or wrong, and an 8 is just as wrong as a 5 in this case.
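To make that concrete, a tiny sketch (hypothetical numbers, not from the lesson) of how MSE on raw digit labels disagrees with right-or-wrong accuracy:

```python
# MSE treats labels as quantities: when the truth is 9,
# a prediction of 8 looks "closer" than a prediction of 5...
mse_8 = (9 - 8) ** 2   # 1
mse_5 = (9 - 5) ** 2   # 16

# ...but as a classification, both are simply wrong (0/1 accuracy)
acc_8 = int(8 == 9)    # 0
acc_5 = int(5 == 9)    # 0

print(mse_8 < mse_5)   # True  -> MSE prefers the 8
print(acc_8 == acc_5)  # True  -> classification treats them the same
```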
Hope this makes it clearer


Thanks, that clarifies it.