SaveModelCallback saves the correct model, but after fit self.best holds the value of the first metric, not of the monitored one

Hello there.
I am training a model on a three-class problem using CrossEntropyFlat as my loss function, with class weights passed in because the dataset is unbalanced.
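For reference, the loss setup looks roughly like this (just a sketch; in fastai v2 the flattened loss is exposed as CrossEntropyLossFlat, and the weight values below are placeholders for my actual class weights):

```python
import torch
from fastai.losses import CrossEntropyLossFlat

# Placeholder per-class weights to offset the imbalance (three classes)
class_weights = torch.tensor([1.0, 2.0, 4.0])
loss_func = CrossEntropyLossFlat(weight=class_weights)
```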

With both SaveModelCallback and EarlyStoppingCallback attached, at the end of training the former reports 0.6073619723320007 as the last validation loss in self.best (as defined in its TrackerCallback parent class). EarlyStoppingCallback then reports "No improvement since epoch 10: early stopping", and a subsequent call to learn.validate() reports the validation loss as 0.8122507929801941.

This is how I am instantiating the cbs when creating the learner:

```python
cbs = [SaveModelCallback(fname=fname), EarlyStoppingCallback(patience=patience)]
```
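And the learner wiring itself, roughly (a sketch; dls, model, loss_func, fname and patience all come from my actual pipeline, and accuracy is simply the first metric I track):

```python
from fastai.learner import Learner
from fastai.metrics import accuracy
from fastai.callback.tracker import SaveModelCallback, EarlyStoppingCallback

# Placeholder wiring; dls, model and loss_func come from my pipeline
cbs = [SaveModelCallback(fname=fname), EarlyStoppingCallback(patience=patience)]
learn = Learner(dls, model, loss_func=loss_func, metrics=[accuracy], cbs=cbs)
learn.fit(30)  # epoch count is arbitrary here; early stopping cuts it short
```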

Am I doing something wrong, or does the CancelFitException raised by EarlyStoppingCallback perhaps interact badly with the last model saved by SaveModelCallback?

In the meantime, I will be looking through the v2 source and comparing it to v1. But if anyone can help, I would be grateful (the hot weather has been rather unforgiving lately).

Found the culprit while prototyping a change to SaveModelCallback that prints only the latest improvement into an ipywidgets.Output().
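Roughly the kind of change I was prototyping (just a sketch; the subclass name and message format are illustrative, and it relies on the new_best flag that TrackerCallback sets each epoch):

```python
import ipywidgets as widgets
from fastai.callback.tracker import SaveModelCallback

class QuietSaveModelCallback(SaveModelCallback):
    "Mirror only the most recent improvement into an ipywidgets.Output widget."
    def __init__(self, out=None, **kwargs):
        super().__init__(**kwargs)
        self.out = out if out is not None else widgets.Output()

    def after_epoch(self):
        super().after_epoch()                # base class still tracks self.best and saves
        if getattr(self, 'new_best', False):
            with self.out:
                self.out.clear_output()      # keep only the latest improvement visible
                print(f'epoch {self.epoch}: {self.monitor} improved to {self.best:.6f}')
```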

It turns out that after fit it reports the accuracy value (the first metric listed in metrics) as self.best, even though the monitor param was left at its default of 'valid_loss' during callback init, and the last epoch as self.epoch.
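A quick way to see the mismatch after training (assuming the callback is reachable as learn.save_model, fastai v2's snake_case attribute for it):

```python
# Compare what SaveModelCallback ended up with against the recorder's own history
smc = learn.save_model                          # the SaveModelCallback instance
cols = list(learn.recorder.metric_names[1:])    # drop 'epoch'
last = learn.recorder.values[-1]                # last epoch: [train_loss, valid_loss, *metrics]
print(dict(zip(cols, last)))
print('monitor:', smc.monitor, '-> best:', smc.best)
# smc.best matches the accuracy column here, not valid_loss
```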

Will keep digging and update this post if there is anything interesting to report.