So why is the printed value (table output) for FBeta different from the actual results, when it’s correct for built-in metrics like accuracy and validation loss? I got a bit lost in the code trying to follow what gets printed where.
My fast.ai library version is 1.0.52-1 (which should be the latest as of this writing).
Upon further investigation, it seems that the values are incorrect once the average parameter is set to something other than micro. Note: the parameter defaults to micro.
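For anyone comparing the numbers by hand: micro and macro averaging really do give different values on the same predictions, so a mismatch between them is easy to spot. A minimal sketch (plain Python, not fastai code; the example labels are made up) showing the difference:

```python
# Toy single-label multiclass example (hypothetical data).
y_true = [0, 0, 0, 0, 1, 2]
y_pred = [0, 0, 0, 0, 2, 2]
classes = sorted(set(y_true) | set(y_pred))

def f1_per_class(c):
    # Count true positives, false positives, false negatives for class c.
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# "macro": compute F1 per class, then average the scores.
macro = sum(f1_per_class(c) for c in classes) / len(classes)

# "micro": pool all TP/FP/FN first, then compute one F1.
# For single-label classification this collapses to plain accuracy,
# because total FP equals total FN.
micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(f"macro={macro:.4f} micro={micro:.4f}")  # macro=0.5556 micro=0.8333
```

So if the table shows a value matching one averaging mode while you expected another, that alone can explain part of the discrepancy (though not a metric that is outright wrong).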
My guess would be that something is wrong in the callback calls in validate, which screws up the metric somehow. I’ll look into it when I have a bit of time.