In v1 we could define custom loss metrics that were printed during fitting by passing LossMetrics as a callback and defining self.metrics in the loss function.

How can I achieve the same functionality in v2? I see there is a class called AvgMetric, but I’m not really sure how to use it.

It looks like AvgMetric is just a wrapper. For instance, here is the source for exp_rmspe, which looks exactly the same as before, IIRC:

import torch
from fastai.metrics import AccumMetric

def _exp_rmspe(inp, targ):
    # Undo the log transform, then compute root mean squared percentage error
    inp, targ = torch.exp(inp), torch.exp(targ)
    return torch.sqrt(((targ - inp) / targ).pow(2).mean())

exp_rmspe = AccumMetric(_exp_rmspe)
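For anyone trying to follow what the wrapper buys you: the idea behind AccumMetric is to accumulate predictions and targets batch by batch and apply the wrapped function once over everything at the end of validation, rather than averaging per-batch values. Here is a torch-free sketch of that accumulate-then-compute pattern (the class and method names are illustrative, not fastai's actual API):

```python
import math

def exp_rmspe(preds, targs):
    """Root mean squared percentage error after undoing a log transform."""
    preds = [math.exp(p) for p in preds]
    targs = [math.exp(t) for t in targs]
    sq_pct = [((t - p) / t) ** 2 for p, t in zip(preds, targs)]
    return math.sqrt(sum(sq_pct) / len(sq_pct))

class AccumulatingMetric:
    """Collect preds/targets batch by batch; compute the metric once at the end."""
    def __init__(self, func):
        self.func = func
        self.preds, self.targs = [], []

    def accumulate(self, preds, targs):
        self.preds += list(preds)
        self.targs += list(targs)

    @property
    def value(self):
        return self.func(self.preds, self.targs)

metric = AccumulatingMetric(exp_rmspe)
metric.accumulate([0.1, 0.2], [0.15, 0.25])  # batch 1
metric.accumulate([0.3], [0.28])             # batch 2
```

Computing the metric once over the accumulated lists gives the same number as a single pass over the whole epoch, which is why this is preferable to averaging per-batch metric values.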

Thanks for the response Mueller, but it’s still not clear how to implement this in a loss function. To make things more concrete, what I want to achieve is similar to what we did in the lesson 7 superres notebook. Take a look at the loss function defined here.

Basically, I define my loss function (as an nn.Module), and whenever forward is called I want to store some variables and show them in the progress table while fitting. Additionally, I would like to be able to plot them after fitting is done. In v1 we could call learn.recorder.plot_metrics().
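For readers who didn't use v1: the pattern being described looked roughly like this, a composite loss that stashes its parts in self.metrics so the LossMetrics callback could print them each epoch. This is a torch-free sketch with placeholder component losses, not the actual superres lesson code:

```python
class CompositeLoss:
    """Loss made of several parts; stores each part in self.metrics so a
    callback (LossMetrics in v1) can display them in the training table."""
    metric_names = ["pixel", "feat"]  # names the callback would print

    def __call__(self, pred, targ):
        # Placeholder components; the real loss used pixel and feature losses.
        pixel = abs(pred - targ)
        feat = 0.5 * abs(pred - targ)
        self.metrics = {"pixel": pixel, "feat": feat}
        return pixel + feat

loss = CompositeLoss()
total = loss(2.0, 1.0)  # self.metrics now holds the individual parts
```

The point is that the components are computed once, inside the loss itself, and the callback only reads self.metrics; nothing is recomputed for display.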

We don’t have a way for loss functions to define metrics yet. Just add to metrics in the usual way for now (which does mean some duplicate code and compute, unless you do some nifty caching).
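One way the "nifty caching" workaround could look: have the loss cache its last-computed parts, and pass plain metric callables that read the cache instead of recomputing. This is only a sketch of the idea under that assumption, not a fastai API:

```python
class CachingLoss:
    """Compute the composite loss once per call and cache its parts so that
    separate metric callables can report them without recomputing."""
    def __call__(self, pred, targ):
        # Placeholder components standing in for real pixel/feature losses.
        pixel = abs(pred - targ)
        feat = 0.5 * abs(pred - targ)
        self.last = {"pixel": pixel, "feat": feat}  # cache for the metrics
        return pixel + feat

loss_func = CachingLoss()

def pixel_metric(pred, targ):
    # Ignores its inputs and reads the value cached by the last loss call.
    return loss_func.last["pixel"]

def feat_metric(pred, targ):
    return loss_func.last["feat"]

loss_value = loss_func(3.0, 1.0)
# pixel_metric / feat_metric would then be passed as metrics in the usual way.
```

The caveat is that the cached values correspond to whatever batch the loss saw last, so the metric hooks have to run in the same batch step as the loss for the numbers to line up.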

We do plan to add this, however.


I would like to work on this. Any pointers on where to start?