Question related to metrics

Hi fastai practitioners! I have a question related to metrics and I wonder if someone has had a similar requirement. I am not sure this post is in the right section, so apologies in advance if it is not.

I am working on a classification problem where objects are labelled across 17 numerical classes (class ‘1’, class ‘2’, class ‘3’ … class ‘17’).
I am looking for a metric that would give me the “absolute error within a range”, i.e. whether we predict either the correct class or the class just before or just after it. For example:

For class ‘2’ objects, suppose I have 50 predictions of ‘2’, 25 of ‘1’, 20 of ‘3’ and 5 of ‘4’.
The metric I am looking for, which I call ‘mean absolute error within a range’, would be:

(50 × |2 − 2| + 25 × |1 − 2| + 20 × |3 − 2| + 5 × |4 − 2|) / 100 = 0.55
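The arithmetic above can be sanity-checked with a few lines of plain Python (the counts are the hypothetical ones from the example):

```python
# Hypothetical prediction counts for objects whose true class is 2:
# predicted class -> number of predictions
counts = {2: 50, 1: 25, 3: 20, 4: 5}
true_class = 2

total = sum(counts.values())
# Mean absolute error over all 100 predictions
mae = sum(n * abs(pred - true_class) for pred, n in counts.items()) / total
print(mae)  # 0.55
```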

Do you know whether one of the existing metrics in fastai could help me here, or should I create this new metric from scratch?

OK, so I finally figured this out by creating a new class inheriting from CMScores:

# class to compute the accuracy within a range, typically +.25 or -.25 in our case
# uses the confusion matrix: main diagonal + diagonal just below + diagonal just above
class AccuracyRange(CMScores):
    "Computes the accuracy within a range, ie either the exact pred, or one class below or one above"
    def on_train_begin(self, **kwargs):
        self._range = 1
        self.n_classes = 0

    def on_epoch_end(self, last_metrics, **kwargs):
        acc_range = self._acc_range()
        return add_metrics(last_metrics, acc_range)

    def _acc_range(self):
        # per-class totals; sums.sum() is the total number of predictions
        sums = self.cm.sum(dim=1)
        # main diagonal (exact hits) plus the diagonals just above and below it
        # (predictions one class off), divided by the total count
        prec = (torch.diag(self.cm).sum() +
                torch.diag(self.cm, diagonal=self._range).sum() +
                torch.diag(self.cm, diagonal=-self._range).sum()) / sums.sum()
        return prec
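To sanity-check the diagonal logic outside of a Learner, here is a minimal sketch applying the same "main diagonal plus the two adjacent diagonals" idea to a toy 4-class confusion matrix (the numbers are made up for illustration, and plain Python stands in for `self.cm` and `torch.diag`):

```python
# Toy confusion matrix: rows = true class, columns = predicted class.
# The counts are invented purely to illustrate the computation.
cm = [
    [10, 3, 1, 0],
    [ 2, 8, 2, 1],
    [ 0, 1, 9, 2],
    [ 3, 0, 1, 7],
]

n = len(cm)
rng = 1  # equivalent of self._range

# Every prediction that lands within `rng` classes of the truth, i.e. the
# main diagonal plus the diagonals just above and below it
within = sum(cm[i][j] for i in range(n) for j in range(n) if abs(i - j) <= rng)
total = sum(sum(row) for row in cm)
print(within / total)  # 0.9
```

Assuming fastai v1's callback-metric API, the class above can then be passed to a learner like any other metric, e.g. `metrics=[accuracy, AccuracyRange()]`.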