I created my own metric (F1) and want to get it into the ordinary metrics list. Is this possible? I want to observe my metric in the ordinary progress bar and also use other callbacks that rely on metrics. For instance, I struggled to combine my callback with the save-best callback. I came up with this hacky solution. Is there a proper way to do it?
Thank you,
Johannes
from dataclasses import dataclass

import sklearn.metrics
import torch
from fastai.basics import *  # fastai v1: Learner, Callback

f1s = []  # global list where I currently collect the per-epoch F1 scores

@dataclass
class MyCallback(Callback):
    learn: Learner
    name: str = 'bestmodel'

    def on_train_begin(self, **kwargs):
        self.y_pred, self.y_true = [], []
        self.best = None

    def on_batch_end(self, last_output, last_target, **kwargs):
        # collect the predicted classes and the targets of every batch
        _, idxs = torch.max(last_output, 1)
        self.y_pred += idxs.tolist()
        self.y_true += last_target.tolist()

    def on_epoch_end(self, **kwargs):
        # sklearn expects y_true first, then y_pred
        f1_macro = sklearn.metrics.f1_score(self.y_true, self.y_pred, average='macro')
        # currently, I am saving the F1 scores in a global array
        f1s.append(f1_macro)
        # save best
        if self.best is None or f1_macro > self.best:
            self.best = f1_macro
            self.learn.save(f'{self.name}')
        # Should I manipulate the learner directly?
        if len(self.learn.recorder.metrics) == 0:
            self.learn.recorder.metrics.append([])
        self.learn.recorder.metrics[0].append(f1_macro)
        if 'f1' not in self.learn.recorder.names:
            self.learn.recorder.names.append('f1')

    def on_train_end(self, **kwargs):
        self.learn.load(f'{self.name}')

learn.callbacks += [MyCallback(learn)]
Hi there! You should look at the docs on metrics, because every metric is a callback behind the scenes. You just have to reset your inner variables in on_epoch_begin, update them on every on_batch_end, then store the final result in self.metric.
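For anyone who finds this later, here is roughly what that looks like for macro F1. This is a minimal sketch assuming fastai v1 and scikit-learn; data and model stand for your own DataBunch and model, and on older v1 releases you only set self.metric instead of returning add_metrics.

import sklearn.metrics
from fastai.basics import *  # fastai v1: Callback, Learner, add_metrics, accuracy

class F1Macro(Callback):
    "Macro F1 as a metric: reset per epoch, accumulate per batch, report at epoch end."
    def on_epoch_begin(self, **kwargs):
        self.y_pred, self.y_true = [], []

    def on_batch_end(self, last_output, last_target, **kwargs):
        # metric callbacks only get batch_end calls during validation
        self.y_pred += last_output.argmax(dim=1).tolist()
        self.y_true += last_target.tolist()

    def on_epoch_end(self, last_metrics, **kwargs):
        self.metric = sklearn.metrics.f1_score(self.y_true, self.y_pred, average='macro')
        # newer fastai v1 releases expect the value to be returned like this;
        # older ones just read self.metric
        return add_metrics(last_metrics, self.metric)

# pass an instance in the metrics list so it shows up in the progress bar
learn = Learner(data, model, metrics=[accuracy, F1Macro()])

Depending on your fastai version there is also a built-in FBeta metric (F1 is the beta=1 case), so you may not even need a custom callback.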
Hi! May I ask what the last_output argument in the on_batch_end callback is, in the case of a classifier? I know it’s the prediction results, but I notice they don’t add up to 1.
I tried print(last_output[0].sigmoid()) and the results still don’t add up to 1. They were slightly off.
last_output is the raw output of the model, but don’t forget that the softmax (not sigmoid) is inside the loss function. If you apply softmax you should get values that add up to 1 (when n=2, sigmoid and softmax are very similar, but not equal).
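If it helps, you can see the difference directly in plain PyTorch with a made-up logit tensor:

import torch

logits = torch.tensor([[2.0, -1.0, 0.5]])  # raw model output (no softmax applied yet)
print(logits.softmax(dim=1))        # probabilities that sum to 1 across the classes
print(logits.softmax(dim=1).sum())  # tensor(1.)
print(logits.sigmoid())             # element-wise squashing, does not sum to 1 in general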
Thanks for the explanation! I’ve been debugging my custom metric for hours…you just saved my day.
But now this got me wondering: why doesn’t softmax appear in the model as a non-linear layer, and is instead part of the loss function? I assume that’s generally how PyTorch models work? What’s the thinking behind this design?
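Not the official rationale, but you can verify the “softmax lives in the loss” part in plain PyTorch: nn.CrossEntropyLoss applies log-softmax itself and then takes the negative log-likelihood, which is numerically more stable than exponentiating in the model and taking a log in the loss. That is why the model typically ends in a plain Linear layer:

import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # raw outputs of a 3-class model, no softmax
target = torch.tensor([0, 2, 1, 0])

ce  = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(logits.log_softmax(dim=1), target)
print(torch.allclose(ce, nll))        # True: CrossEntropyLoss = log-softmax + NLLLoss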
This is so good that I would love to see it as its own section in the “Tutorials”, named something like “Create your own Callback”. I’m hoping that this will break my barrier to understanding how to make my own callback, e.g. one that tells me the last batch number before my GPU crashes, destroying all data with it.
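In the meantime, here is a minimal sketch of the kind of callback described above, using the same fastai v1 Callback API as the rest of this thread (the file name is made up, and I’m assuming the iteration counter is among the keyword arguments fastai passes to on_batch_end):

from fastai.basics import *  # fastai v1: Callback

class BatchLogger(Callback):
    "Write the current iteration number to disk after every batch."
    def __init__(self, path='last_batch.txt'):
        self.path = path

    def on_batch_end(self, iteration, **kwargs):
        # overwrite the file each batch; after a crash it holds the last batch reached
        with open(self.path, 'w') as f:
            f.write(str(iteration))

learn.callbacks.append(BatchLogger())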
Hey @fitler, I was also trying to use F1 as a metric, but as a newbie I am having a hard time following the explanation: could you provide the solution code? Maybe then I will understand what you mean. Thanks a lot in advance!
The documentation on metrics no longer has a section for creating custom metrics based on the Metric class. Can you please describe how we can create custom metrics by sub-classing the Metric class?
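Not official docs, but until that section comes back, here is roughly what subclassing Metric looks like in fastai v2, based on its reset / accumulate / value interface (treat the details as an approximation; fastai v2 also ships scikit-learn-backed metrics such as F1Score, so a custom subclass is only needed for things those don’t cover):

import sklearn.metrics
from fastai.basics import *  # fastai v2: Metric

class F1Macro(Metric):
    "Macro-averaged F1 accumulated over the validation set."
    def reset(self):
        self.y_pred, self.y_true = [], []

    def accumulate(self, learn):
        # learn.pred / learn.y hold the current batch's predictions and targets
        self.y_pred += learn.pred.argmax(dim=-1).tolist()
        self.y_true += learn.y.tolist()

    @property
    def value(self):
        return sklearn.metrics.f1_score(self.y_true, self.y_pred, average='macro')

An instance is then passed in the Learner’s metrics list like any built-in metric.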