I’m doing binary segmentation with fastai’s default U-Net workflow, and I’m seeing this phenomenon:
lower loss doesn’t always come with better metric results (like accuracy, dice, jaccard)
I’ve seen the same thing not only in this particular segmentation problem but in others as well. Does this imply that the chosen loss function isn’t good enough (as a general matter)?
Also, fastai’s default loss function for segmentation is
FlattenedLoss of CrossEntropyLoss(), yet many works have claimed that other loss functions, like Dice loss or Focal loss, work better on this kind of problem. Is there a reason those aren’t included as options alongside the default?
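For context, here is a minimal sketch of the kind of alternative I mean: a soft Dice loss written as a plain PyTorch function (the function name and the exact smoothing constant are my own choices, not anything fastai ships), which one can assign to a Learner via its `loss_func` attribute.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, smooth=1e-6):
    """Soft Dice loss for binary segmentation.

    logits:  raw model outputs, shape (N, 2, H, W) — two-class logits,
             matching what a fastai U-Net head produces for 2 classes
    targets: integer masks, shape (N, H, W), values in {0, 1}
    """
    # Convert logits to a per-pixel foreground probability
    probs = F.softmax(logits, dim=1)[:, 1]
    targets = targets.float()
    # Soft Dice coefficient over the whole batch
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum()
    dice = (2 * intersection + smooth) / (union + smooth)
    # Loss decreases as the predicted mask overlaps the target more
    return 1 - dice
```

In fastai one would then set `learn.loss_func = dice_loss` after creating the Learner (assuming the model outputs the (N, 2, H, W) shape above); in practice Dice is often combined with cross-entropy rather than used alone, since pure Dice can be unstable early in training.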
Thanks for reading.