NaNs in metric callbacks cause mean over minibatches to become nan

I am trying to implement sensitivity and specificity metric callbacks. However, when a mini-batch doesn't contain any example of one of the classes, the corresponding metric (say, sensitivity) is undefined, so I return torch.tensor(float('nan')) in those cases, hoping the averaging process will ignore the NaNs (I can't return 0, since zeros would drag the average down). But that doesn't seem to be the case: the sensitivity for the entire validation set becomes nan whenever even a single mini-batch returns nan. Has anyone on this forum successfully incorporated a sensitivity metric in fastai? Any advice would be appreciated.
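
For reference, here is a minimal sketch (plain PyTorch, function and variable names are just illustrative, not my exact callback) of the per-batch metric I'm computing and of how a single NaN poisons the plain mean:

```python
import torch

def batch_sensitivity(preds, targs, pos_class=1):
    """Sensitivity (recall for the positive class) on a single mini-batch."""
    pred_lbls = preds.argmax(dim=1)
    tp = ((pred_lbls == pos_class) & (targs == pos_class)).sum().float()
    pos = (targs == pos_class).sum().float()
    # When the batch contains no positive examples, sensitivity is undefined.
    return tp / pos if pos > 0 else torch.tensor(float('nan'))

# Example: one batch without any positives poisons the plain mean.
per_batch = torch.tensor([0.8, 0.9, float('nan'), 0.7])
print(per_batch.mean())                           # tensor(nan)
print(per_batch[~torch.isnan(per_batch)].mean())  # tensor(0.8000), NaN batches ignored
```

I suppose another route would be to accumulate true-positive and positive counts across the whole epoch and compute sensitivity once at the end, which would sidestep the NaN issue entirely, but I'm not sure of the right way to hook that into fastai's callback system.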