marcmuc
(Marc P. Rostock)
November 25, 2018, 7:21pm
+1! I have the same problem every time I deviate from the exact lesson notebooks (which is basically always). Training works, but the metrics don't. I have also never gotten the f1 metric to work, so I seem to lack a fundamental understanding here. It would be extremely helpful if someone who fully understands the reasons for these errors and the mechanics of the metrics callbacks could explain this. I think other people struggle with this as well. Examples are quoted below, followed by a minimal sketch of what I think is going on with the shapes:
Thank you @sgugger. This fixed the problem. Now I have another one:
learn.fit_one_cycle(1, 3e-2, moms=(0.8,0.7))
It gives an error at the validation stage; maybe the default accuracy metric is not appropriate:
~/anaconda2/envs/fastai.1/lib/python3.6/site-packages/fastai/metrics.py in accuracy(input, targs)
37 input = input.argmax(dim=-1).view(n,-1)
38 targs = targs.view(n,-1)
---> 39 return (input==targs).float().mean()
40
41 def error_rate(input:Tensor, ta…
Maybe someone can help me here: I simply can't get the F-scores to work with my own data, no matter whether images or other. It works fine in the planets notebook, but even if I copy the exact same stuff over, I always get "mismatch" complaints that I don't understand. Training works fine, lr_find works fine, metrics like accuracy and/or Precision() get displayed nicely, but as soon as I add any fbeta-based metric, it stops working.
This is the specific code right now:
f1_fai = partial(fbeta,…
and here:
Yes! I got strange loss numbers on validation while training, but ended up with what seemed like good numbers, though I don't have the experience at this point to judge whether the loss numbers I was getting were good…
As for the metrics, I probably passed in None, or possibly didn't even execute that exact line.
As for col vs cols, maybe it worked for me because the first column was the image name and the second was the target… I'll take a look and report back.
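Regarding the "mismatch" errors quoted above, here is my best guess, as a minimal sketch rather than a confirmed answer: accuracy argmaxes the predictions and compares them to integer class indices, so it breaks when the targets are not plain class indices, while the functional fbeta from the planets notebook expects one-hot / multi-label targets, so it reports a size mismatch on single-label data. The metric names below (accuracy, fbeta, FBeta) and the arguments I pass (average='macro', beta=1, thresh=0.2) are my assumptions about the fastai v1 API and may differ between versions.

from functools import partial
from fastai.metrics import accuracy, fbeta, FBeta

# Single-label classification: targets are integer class indices.
# accuracy works here, but the functional fbeta does not, because it expects
# one-hot / multi-label targets and complains about a size mismatch.
# My guess at the fix: the class-based FBeta metric (same family as
# Precision() / Recall()), which works on class indices.
f1_single = FBeta(average='macro', beta=1)

# Multi-label classification (planets-style): targets are one-hot vectors,
# so the functional fbeta from the lesson notebook is the right tool.
f1_multi = partial(fbeta, thresh=0.2, beta=1)

# Attach whichever matches the data to an existing learner, e.g.:
# learn.metrics = [accuracy, f1_single]   # single-label data
# learn.metrics = [f1_multi]              # multi-label data

If someone who knows the metrics internals could confirm or correct this picture, that would clear up a lot for me.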