I’m observing that the suggested use of partial functions for metrics leads to misleading results, e.g. in the lesson3-planet notebook:
acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = create_cnn(data, arch, metrics=[acc_02, f_score])
epoch train_loss valid_loss accuracy_thresh fbeta
The metric column names are misleading, because they show the names of the default functions, not the metrics that were actually used (the default thresh values differ from the 0.2 passed in). There must be a better way to make the metrics used match the names displayed in the header of the results.
The relevant code is:
def on_train_begin(self, epochs:int, pbar:PBar, metrics:MetricFuncList)->None:
"About to start learning."
self.state_dict = _get_init_state()
self.state_dict['n_epochs'],self.state_dict['pbar'],self.state_dict['metrics'] = epochs,pbar,metrics
names = [(met.name if hasattr(met, 'name') else camel2snake(met.__class__.__name__)) for met in self.metrics]
self('train_begin', metrics_names=names)
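To illustrate where the name comes from, here is a minimal stand-alone sketch (accuracy_thresh below is a stand-in for the fastai metric, not the real implementation). A partial carries no name of its own, so the only readily available names are the wrapped function’s __name__ (reachable via .func, presumably how the header ends up saying accuracy_thresh) and the class name 'partial':

```python
from functools import partial

def accuracy_thresh(preds, targs, thresh=0.5):
    # stand-in for fastai's accuracy_thresh, for illustration only
    return sum((p > thresh) == t for p, t in zip(preds, targs)) / len(preds)

acc_02 = partial(accuracy_thresh, thresh=0.2)

# a partial has neither a 'name' attribute nor a '__name__' of its own
assert not hasattr(acc_02, 'name')
assert not hasattr(acc_02, '__name__')

# the two names that ARE available: the wrapped function's __name__,
# and the class name 'partial' used by the camel2snake fallback above
assert acc_02.func.__name__ == 'accuracy_thresh'
assert acc_02.__class__.__name__ == 'partial'
```

Either way, the bound thresh=0.2 is invisible in the displayed name.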
I see we already have an AverageMetric class, so this can now be fixed with a hack:
acc_02 = AverageMetric(partial(accuracy_thresh, thresh=0.2))
acc_02.name = "acc_02"
learn = create_cnn(data, arch, metrics=[acc_02])
Now the metric header is displayed correctly:
epoch train_loss valid_loss acc_02
But perhaps we could add a new wrapper class, so the user could simply write:
acc_02 = MakeMetric(partial(accuracy_thresh, thresh=0.2), "acc_02")
learn = create_cnn(data, arch, metrics=[acc_02])
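MakeMetric does not exist in fastai; the following is only a sketch of what such a wrapper could look like (in the real library it would probably subclass AverageMetric so per-batch averaging keeps working; accuracy_thresh is again a stand-in):

```python
from functools import partial

class MakeMetric:
    """Hypothetical wrapper: pairs a metric callable with an explicit name.

    Sketch only -- a real fastai version should subclass AverageMetric
    so that the per-batch averaging machinery is preserved."""
    def __init__(self, func, name):
        self.func, self.name = func, name

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

def accuracy_thresh(preds, targs, thresh=0.5):
    # stand-in for fastai's accuracy_thresh, for illustration only
    return sum((p > thresh) == t for p, t in zip(preds, targs)) / len(preds)

acc_02 = MakeMetric(partial(accuracy_thresh, thresh=0.2), "acc_02")
assert acc_02.name == "acc_02"          # the callback's hasattr(met, 'name') path
assert acc_02([0.3, 0.1], [True, False]) == 1.0
```

Since the callback handler above already prefers met.name when present, this would display the chosen name without touching the library code.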
I also researched partial(): it’s possible to write a wrapper around partial that injects a name, say under partial_func.__name__. But it still won’t behave like a normal function, because the fallback path uses __class__.__name__, which for a partial object is always 'partial' and can’t be overridden. So this is probably not a good approach.
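For completeness, here is a sketch of that partial-wrapper idea (named_partial is hypothetical; accuracy_thresh is a stand-in). It shows both halves of the observation: __name__ can be injected onto the partial, but __class__.__name__ stays 'partial':

```python
from functools import partial, update_wrapper

def accuracy_thresh(preds, targs, thresh=0.5):
    # stand-in for fastai's accuracy_thresh, for illustration only
    return sum((p > thresh) == t for p, t in zip(preds, targs)) / len(preds)

def named_partial(func, name, *args, **kwargs):
    """Hypothetical helper: bind arguments and inject a display name."""
    p = partial(func, *args, **kwargs)
    update_wrapper(p, func)  # copies __name__, __doc__, ... onto the partial
    p.__name__ = name        # partial objects have a __dict__, so this works
    return p

acc_02 = named_partial(accuracy_thresh, "acc_02", thresh=0.2)
assert acc_02.__name__ == "acc_02"
# ...but the class name is still 'partial', which is what the
# camel2snake(met.__class__.__name__) fallback would pick up
assert acc_02.__class__.__name__ == "partial"
```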