Can someone please explain the “binary_loss” in the last couple of cells of lesson 1?
```python
import numpy as np

def binary_loss(y, p):
    return np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p)))
```
- Why is it the “binary” loss? Does that just mean that it is normalized between 0 and 1?
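For context, here is a minimal check of how I currently understand the function. The example values are my own, not from the lesson: `y` as the true 0/1 labels and `p` as the predicted probability of class 1.

```python
import numpy as np

def binary_loss(y, p):
    return np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

# y: true binary labels, p: predicted probability of the positive class
y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.8, 0.6])
print(binary_loss(y, p))  # ≈ 0.2656
```

If that reading is right, "binary" would refer to the labels being 0/1 (two classes), not to the loss value itself being bounded between 0 and 1.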
What is the `y` here? It is not the `y` from `log_preds, y = learn.TTA()`, because it crashes if I use that `y`.

I see that when a list is made to call this function, the variable is called `acts`. What does that stand for? I realise that it is `y`, but I don't understand the naming.
How would I get `y` from the confusion matrix? I got `inf` when I tried using `probs`, so maybe I misunderstand what `probs` is.
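One guess at where the `inf` might come from (a minimal sketch with made-up values, not the actual lesson data): if any entry of `p` is exactly 0 or 1, `np.log` returns `-inf` and the mean becomes `inf`.

```python
import numpy as np

def binary_loss(y, p):
    return np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

y = np.array([0, 1])
p = np.array([1.0, 0.8])  # a predicted probability of exactly 1.0
# For the first element: (1 - y) * np.log(1 - p) = 1 * np.log(0) = -inf
# (NumPy emits a divide-by-zero RuntimeWarning), so the mean is inf.
print(binary_loss(y, p))  # → inf
```

So if `probs` contains hard 0/1 values rather than soft probabilities, that would explain the result.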
Also, one really dumb question, but I just want to verify I understand: what does `precompute` actually precompute when set to `True`?