Running a `unet_learner` with fastai v2.5.3 to perform segmentation, I noticed that when running `learn.get_preds(with_decoded=True)`, the decoded results do not match the argmax of the predicted probabilities.
To demonstrate this, I made a copy of @muellerzr's Binary Segmentation notebook. You can see the output here.
As seen in the notebook, `torch.argmax` yields a different result from the decoded values for about 6% of pixels. Additionally, the `decodes` output looks much more like the target mask than the `argmax` result does (see below).
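For concreteness, the 6% figure comes from a pixelwise comparison along these lines (a minimal, self-contained sketch — the random tensors here are stand-ins for the notebook's actual decoded and argmax masks):

```python
import torch

# Stand-ins for the two masks being compared, shape (N, H, W);
# in the real notebook these come from get_preds(with_decoded=True)
# and from torch.argmax over the predicted probabilities
decoded_mask = torch.randint(0, 2, (8, 32, 32))
argmax_mask = torch.randint(0, 2, (8, 32, 32))

# Fraction of pixels where the two masks disagree
frac_diff = (decoded_mask != argmax_mask).float().mean().item()
print(f"{frac_diff:.1%} of pixels differ")
```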
The loss function is just `FlattenedLoss of CrossEntropyLoss()`, and its `decodes` method is just `def decodes(self, x): return x.argmax(dim=self.axis)`. In other words, it's not clear to me why `decodes` should behave any differently from simply taking the argmax of the predictions. I suspect I'm missing something obvious, but figured I'd raise the question here.
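To show why I expected the two to agree, here is a self-contained sketch in plain PyTorch (shapes are made up; `dim=1` plays the role of `self.axis` for segmentation outputs of shape `(batch, classes, H, W)`):

```python
import torch

# Fake predicted probabilities: (batch, classes, H, W)
preds = torch.randn(2, 3, 4, 4).softmax(dim=1)

# What the loss function's decodes does: argmax over the class axis
decoded = preds.argmax(dim=1)

# Taking torch.argmax directly over the same axis
direct = torch.argmax(preds, dim=1)

# By construction these are identical, which is why the ~6% mismatch
# from get_preds surprised me
assert torch.equal(decoded, direct)
```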
Target mask:
Decodes mask:
Argmax mask: