NLLLoss was mentioned in a couple of lectures, but the implementation was never really explained.
Tracing the PyTorch implementation leads to C code:
I’m having a really hard time grasping this concept. I think the implementation was skipped during the lessons, but if I’m wrong about this, I would be really grateful for a link pointing to the video!
Also, the wiki only gives the mathematical formula for multi-class log loss, but does not include anything about Python code:
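(For reference, I believe the formula there is the standard multi-class log loss:

$$\text{log loss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij}\,\log p_{ij}$$

where \(y_{ij}\) is 1 if sample \(i\) belongs to class \(j\) and 0 otherwise, and \(p_{ij}\) is the predicted probability that sample \(i\) belongs to class \(j\). When the targets are class indices, this reduces to averaging \(-\log p_{i, y_i}\), the negative log-probability of the correct class.)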
Hi. I also had a hard time grasping this, especially because of the confusion between CrossEntropyLoss and NLLLoss. I’m not sure the implementation of negative log likelihood loss was ever explained in the courses. In short: CrossEntropyLoss = LogSoftmax + NLLLoss. Here is a quick example with an NLLLoss implementation (a minimal sketch, not the actual PyTorch code, which would be something like this):
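import torch
import torch.nn.functional as F

# Minimal sketch: for each sample, take the log-probability assigned to the
# correct class, negate it, and average over the batch ('mean' reduction).
def NLLLoss(logs, targets):
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        out[i] = logs[i][targets[i]]
    return -out.sum() / len(out)

x = torch.randn(3, 5)                  # raw scores (logits): 3 samples, 5 classes
y = torch.tensor([1, 0, 4])            # target class indices
log_probs = F.log_softmax(x, dim=1)    # NLLLoss expects log-probabilities as input

print(NLLLoss(log_probs, y))           # our sketch
print(F.nll_loss(log_probs, y))        # PyTorch's NLLLoss -- should match
print(F.cross_entropy(x, y))           # CrossEntropyLoss = LogSoftmax + NLLLoss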
NLLLoss also supports a ‘reduce’ parameter, which is True by default. In this case it would be something like this:
def NLLLoss(logs, targets, reduce=True):
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        out[i] = logs[i][targets[i]]    # log-probability of the correct class
    # reduce=True averages over the batch; reduce=False returns per-sample losses
    return -(out.sum() / len(out)) if reduce else -out
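A quick sanity check against PyTorch itself (note that in recent PyTorch versions the ‘reduce’ flag has been deprecated in favor of reduction='mean' / reduction='none', so the built-in call below uses the newer spelling):

log_probs = F.log_softmax(torch.randn(3, 5), dim=1)
y = torch.tensor([1, 0, 4])
print(NLLLoss(log_probs, y, reduce=False))           # per-sample losses
print(F.nll_loss(log_probs, y, reduction='none'))    # should match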