Take each test sample's metric individually

I have trained a U-Net learner and I test it on a test set of N samples. I use various metrics, e.g., foreground_accuracy, dice_multi, etc. Is there an existing function that returns the individual foreground_accuracy of each sample in the test set, instead of a single overall quantity?

E.g., what I want: Input [N, features] → Output [N, foreground_accuracy]
What is already built in: Input [N, features] → Output [overall foreground_accuracy]


I suspect you might have to use a Callback for that: where the training loop calculates the metrics, store them in a list instead of averaging them. Check lesson 19, I think, where Callbacks are taught.

That would be the first thing I would try, since what you are aiming to do is change how the training loop works (I could be wrong, so try it and see if it works).
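To illustrate the idea, here is a minimal plain-Python sketch of such a callback. It mimics the pattern fastai Callbacks follow but is not fastai's actual API; the class name, `after_batch` hook signature, and the toy `foreground_acc` metric are all assumptions for illustration.

```python
# Hypothetical sketch of the callback idea: instead of letting the loop
# average a metric, store each batch's per-sample values in a list.
# This mimics the fastai Callback pattern but is NOT fastai's actual API.

class PerSampleMetricCallback:
    """Collects one metric value per sample instead of an average."""

    def __init__(self, metric_fn):
        self.metric_fn = metric_fn   # metric computed on one (pred, target) pair
        self.values = []             # one entry per test sample

    def after_batch(self, preds, targets):
        # Apply the metric to each sample in the batch individually
        for pred, target in zip(preds, targets):
            self.values.append(self.metric_fn(pred, target))


# Toy "foreground accuracy" on binary masks given as lists of 0/1:
def foreground_acc(pred, target):
    fg = [(p, t) for p, t in zip(pred, target) if t == 1]
    return sum(p == t for p, t in fg) / len(fg) if fg else 0.0

cb = PerSampleMetricCallback(foreground_acc)
cb.after_batch(preds=[[1, 0, 1], [0, 1, 1]],
               targets=[[1, 0, 0], [1, 1, 1]])
print(cb.values)  # one foreground accuracy per sample
```

In a real fastai training loop you would hook this logic into the point where batch predictions and targets are available, rather than calling `after_batch` by hand.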

Thank you jimmiemunyi. I solved it with a custom implementation: I load the trained model, iterate over the test set, and calculate my metric for each individual sample. Then I extract my statistic.
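A NumPy sketch of that approach, under stated assumptions: `model` here is a stand-in callable (not a fastai Learner), and `foreground_accuracy` is a plausible per-sample definition (pixel accuracy restricted to foreground pixels), not fastai's exact implementation.

```python
import numpy as np

def foreground_accuracy(pred_mask, true_mask):
    """Pixel accuracy restricted to foreground (true_mask != 0) pixels."""
    fg = true_mask != 0
    if not fg.any():
        return 0.0
    return float((pred_mask[fg] == true_mask[fg]).mean())

def per_sample_metrics(model, test_set):
    """Return one metric value per test sample instead of the average."""
    return np.array([foreground_accuracy(model(x), y) for x, y in test_set])

# Toy example with a 'model' that just thresholds its input:
model = lambda x: (x > 0.5).astype(int)
test_set = [
    (np.array([0.9, 0.2, 0.8]), np.array([1, 0, 1])),
    (np.array([0.1, 0.7, 0.3]), np.array([1, 1, 0])),
]
scores = per_sample_metrics(model, test_set)
print(scores.mean(), scores.std())  # summary statistics over the samples
```

From the resulting array you can extract any statistic you like (mean, std, percentiles, worst samples), which is the "extract my statistic" step.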


Nice. Glad you could solve it!