Sorry, I meant metric, not loss. Also I hadn’t understood that we have to sum all true positives/false positives over all the pictures of the validation set before applying the formula for AP/mAP, so the code above won’t work.
I’m not quite sure I understand the definition of false positives though, and searching on the net has made me even more confused. When we have five predictions with the right class for ground truth object number one, I gather it’s one TP and four FP, but how do we count a prediction that sits on a ground truth object yet gives the wrong class?
For instance, say I have a ground truth object of class one with five predictions on it: three of class one and two of class two. How many false positives does that give? Four?
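To make my confusion concrete, here is a rough sketch of the per-image counting as I currently understand it (the greedy same-class matching, the 0.5 IoU threshold and all the names are my own assumptions, not something taken from the course code):

```python
from collections import defaultdict


def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def count_tp_fp(predictions, ground_truths, iou_threshold=0.5):
    """Count TP/FP per class on a single image.

    predictions:   list of (class_id, confidence, box)
    ground_truths: list of (class_id, box)
    """
    tp, fp = defaultdict(int), defaultdict(int)
    matched = set()  # ground truths already claimed by an earlier prediction

    # Higher-confidence predictions get first claim on a ground truth.
    for cls, conf, box in sorted(predictions, key=lambda p: -p[1]):
        best_iou, best_gt = 0.0, None
        for i, (gt_cls, gt_box) in enumerate(ground_truths):
            if gt_cls != cls or i in matched:
                continue  # only same-class, still-unmatched ground truths count
            overlap = iou(box, gt_box)
            if overlap > best_iou:
                best_iou, best_gt = overlap, i
        if best_gt is not None and best_iou >= iou_threshold:
            tp[cls] += 1
            matched.add(best_gt)
        else:
            fp[cls] += 1  # duplicate detection, wrong class, or too little overlap

    return tp, fp


# My example: one class-1 ground truth, five predictions roughly on top of it,
# three of class 1 and two of class 2.
gt = [(1, (0, 0, 10, 10))]
preds = [(1, 0.9, (0, 0, 10, 10)), (1, 0.8, (1, 1, 10, 10)), (1, 0.7, (0, 0, 9, 9)),
         (2, 0.6, (0, 0, 10, 10)), (2, 0.5, (1, 1, 9, 9))]
print(count_tp_fp(preds, gt))
```

Counted this way my example would come out as 1 TP and 2 FP for class one, plus 2 FP charged against class two, i.e. four false positives in total, but I’m not sure that’s actually the standard convention, hence the question.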