FBeta formula for averages references a non-existent metric

The FBeta class is a grandchild of ConfusionMatrix
(class hierarchy: ConfusionMatrix < CMScores < FBeta).

Both ConfusionMatrix and FBeta define an “on_epoch_end” callback; for an FBeta instance, FBeta's version shadows ConfusionMatrix's take on “on_epoch_end”, which therefore never runs.

FBeta uses self.metric in its version of “on_epoch_end”, but the only place in the hierarchy where self.metric is created is ConfusionMatrix's “on_epoch_end”, which gets skipped in this case.

Hence an AttributeError: 'FBeta' object has no attribute 'metric'.

It is easier to see in the code than to put into words:

class ConfusionMatrix(Callback):
    ...
    def on_epoch_end(self, **kwargs):
        self.metric = self.cm         # <<< never runs for an FBeta instance,
                                      #     so self.metric is never set

# ...
class CMScores(ConfusionMatrix):
# ...
class FBeta(CMScores):
    # ...
    def on_epoch_end(self, last_metrics, **kwargs):
        # ...
        # throws here since "self.metric" was not created anywhere
        if self.avg: metric = (self._weights(avg=self.avg) * self.metric).sum()  # <<<< 
        return add_metrics(last_metrics, metric)
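
Outside fastai, the same shadowing can be reproduced with two plain classes. A minimal, framework-free sketch with made-up names:

class Parent:
    def on_epoch_end(self, **kwargs):
        self.metric = "computed by Parent"

class Child(Parent):
    def on_epoch_end(self, **kwargs):
        # Parent.on_epoch_end is shadowed and never called,
        # so self.metric was never set
        return self.metric

Child().on_epoch_end()  # AttributeError: 'Child' object has no attribute 'metric'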

I was going to submit a PR, but I am not exactly sure what should go in place of self.metric in:

if self.avg: metric = (self._weights(avg=self.avg) * self.metric).sum()

I tried running with self.cm, which is visible, but the fbeta score I get makes no sense.

Why do you want a self.metric? That has been deprecated in favor of the add_metrics function at the end.
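
Roughly, the intended pattern is for a metric callback to return its value through add_metrics rather than stash it on self. A minimal sketch, assuming the fastai v1 callback API (EpochAccuracy is a made-up example, import paths as in fastai v1):

from fastai.callback import Callback
from fastai.torch_core import add_metrics

class EpochAccuracy(Callback):
    "Toy metric illustrating the add_metrics pattern."
    def on_epoch_begin(self, **kwargs):
        self.correct, self.total = 0, 0
    def on_batch_end(self, last_output, last_target, **kwargs):
        preds = last_output.argmax(dim=-1)
        self.correct += (preds == last_target).sum().item()
        self.total += last_target.numel()
    def on_epoch_end(self, last_metrics, **kwargs):
        # hand the epoch value back through add_metrics;
        # no self.metric attribute involved
        return add_metrics(last_metrics, self.correct / max(self.total, 1))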

It's not that I want self.metric; it's that FBeta(average='macro') fails with the current code at the end of the epoch, i.e. this line:

if self.avg: metric = (self._weights(avg=self.avg) * self.metric).sum()

gets called since self.avg is set, and self.metric is not defined. So this line needs to change.

I am just curious what it needs to change to in order to still compute the correct fbeta score.

Oh, that's a bug. The self. needs to be removed; just pushed a fix.
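
So the line presumably now reads:

if self.avg: metric = (self._weights(avg=self.avg) * metric).sum()

with metric being the local per-class tensor computed earlier in the same on_epoch_end.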

Ok, that's what I thought.

I did try just using metric, but was thrown off by fbeta coming back as nan in this case.

I just looked a little closer and I see why: whenever precision has not yet covered all the classes, it returns nan in some of its elements, so the final multiplication and .sum() produce an overall nan.

This happens at the beginning of training; once precision has seen all the classes, fbeta has a number to show.

I submitted a PR to do metric[metric != metric] = 0 before multiplying by the weights. It ensures the nans are converted to 0s, so the resulting fbeta is never nan.
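
A quick standalone illustration of why that line works: nan is the only float not equal to itself, so the comparison doubles as a nan mask (made-up numbers):

import torch

metric = torch.tensor([0.8, float('nan'), 0.6])  # per-class fbeta, one class not seen yet
weights = torch.tensor([0.5, 0.2, 0.3])

print((weights * metric).sum())   # tensor(nan): a single nan poisons the sum

metric[metric != metric] = 0      # nan != nan is True, so this zeroes out the nans
print((weights * metric).sum())   # tensor(0.5800)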