Show predicted mask image - fastai V1

Hi,
I have trained a model for segmentation, and my datasets were built with SegmentationDataset.
After fitting the model, I get the predicted tensors for the masks with:

preds, y = learner.get_preds()         # preds: (n_items, n_classes, H, W)
predicted_masks = preds.argmax(dim=1)  # per-pixel class label: (n_items, H, W)

Now, I would like to take the predicted masks (integer tensors in which each value is a class label) and visualize them as images (as ImageMask or Image), to get some sort of visual comparison with the ground truth.

How can I do this? Is there a way to easily show them with the same colors as the input masks?

So far, I’m converting the tensors to arrays and plotting each pixel with a dictionary of label:color (the colors manually chosen). I didn’t find a way to get that label:color dictionary directly from my input PNG mask files, and I guess having it would already be an improvement!
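For reference, here is roughly the kind of thing I have so far (a minimal sketch, assuming the input masks are paletted "P"-mode PNGs; mask_path is a hypothetical path to one of them):

from PIL import Image as PILImage

mask = PILImage.open(mask_path)                   # one of the input mask files
if mask.mode == 'P':                              # indexed PNGs carry their palette
    flat = mask.getpalette()                      # flat list [r0, g0, b0, r1, g1, b1, ...]
    label2color = {i: tuple(flat[3*i:3*i+3])
                   for i in set(mask.getdata())}  # only the labels actually present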

Thank you very much for your help!


If you want to visualize the mask like the inputs, I’d suggest creating an ImageMask from your predictions with pred_mask = ImageMask(predicted_masks[idx]) (it should accept a NumPy array or a tensor). Then pred_mask.show() will display the mask.
To see it with the corresponding input, I think you need data.valid_ds[idx].show(y=pred_mask).
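Putting it together, something like this should work (a sketch; idx is just an example index, and depending on your fastai version the mask tensor may need an explicit channel dimension, hence the [None]):

idx = 0                                            # any validation index
pred_mask = ImageMask(predicted_masks[idx][None])  # (H, W) -> (1, H, W)
pred_mask.show()                                   # the mask alone
data.valid_ds[idx].show(y=pred_mask)               # the mask over the input image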


Thank you, that is exactly what I was looking for!

Actually, when trying data.valid_ds[idx].show(y=pred_mask) I realized that, because the transformations I apply to my dataset are random (rand_padding), I get a different image crop on every call of data.valid_ds[idx], so the mask I get from the predictions and the image do not match, even when setting random seeds (I set np.random.seed, random.seed, torch.manual_seed and torch.cuda.manual_seed_all).

Would you know how to get back the image from which the mask was predicted?

FOUND a solution: I’m using learner.pred_batch instead of learner.get_preds.
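In case it helps someone, this is roughly what that looks like (a sketch; one_batch and the batch= argument of pred_batch are fastai v1 APIs, and batch= may not exist in older releases):

xb, yb = data.one_batch(DatasetType.Valid, detach=False, denorm=False)  # exact transformed batch
preds = learner.pred_batch(batch=(xb, yb))           # predictions for that same batch
pred_mask = ImageMask(preds[0].argmax(dim=0)[None])  # per-pixel class -> (1, H, W) mask
Image(data.denorm(xb.cpu())[0]).show(y=pred_mask)    # overlay on the matching input image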

There’s no TTA for segmentation tasks, if that’s what you’re asking, so on the validation set your transforms should be deterministic.
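One quick way to check (a sketch; in fastai v1 the validation LabelList keeps its transforms in valid_ds.tfms):

print(data.valid_ds.tfms)  # should list only deterministic tfms, e.g. crop_pad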

Hello, I am a newbie in DL and fastai, trying to create segmented images with the help of unet_learner.
My code is below:
# get_transforms without randomness
def preprocess(do_flip:bool=True, flip_vert:bool=False, max_rotate:float=10., max_zoom:float=1.1,
               max_lighting:float=0.2, max_warp:float=0.2, p_affine:float=0.75,
               p_lighting:float=0.75, xtra_tfms:Optional[Collection[Transform]]=None)->Collection[Transform]:
    "Utility func to easily create a list of flip, rotate, zoom, warp, lighting transforms."
    res = [rand_crop()]
    if do_flip: res.append(dihedral_affine() if flip_vert else flip_affine(p=0.5))
    if max_warp: res.append(symmetric_warp(magnitude=(-max_warp,max_warp), p=p_affine))
    if max_rotate: res.append(rotate(degrees=(-max_rotate,max_rotate), p=p_affine))
    if max_zoom>1: res.append(rand_zoom(scale=(1.,max_zoom), p=p_affine))
    if max_lighting:
        res.append(brightness(change=(0.5*(1-max_lighting), 0.5*(1+max_lighting)), p=p_lighting))
        res.append(contrast(scale=(1-max_lighting, 1/(1-max_lighting)), p=p_lighting))
    # (train tfms, valid tfms)
    return (res + listify(xtra_tfms), [crop_pad(is_random=False)])

data = (SegmentationItemList
        .from_folder(path_img)                           # input images
        .random_split_by_pct(valid_pct=0.2)              # 80/20 train/valid split
        .label_from_func(get_y_fn, classes=[0,1])        # mask file looked up per image
        .transform(preprocess(), size=size, tfm_y=True)  # same tfms applied to the masks
        .databunch(bs=bs)
        .normalize(imagenet_stats))
wd = 1e-2
learn = unet_learner(data, models.resnet34, wd=wd, metrics=metrics)

lr = 3e-3
learn.fit_one_cycle(5, slice(lr), pct_start=0.9)

data_test = SegmentationItemList.from_folder(path_train_pred)
# create mask
learn.predict(data_test[0])[0]
# see initial image
data_test[0].apply_tfms(preprocess(), size=size)

As a result I receive a mask of some part of the initial image, and the same part of the image. How can I get the entire (not only a part of the) initial image with its mask? As I understand it, I need to get rid of the transforms on the test set (except the resizing transforms), but I can’t figure out how to do that.
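In other words, something along these lines, if I understand correctly (a sketch I have not verified; preprocess()[1] is the deterministic validation half of the tuple returned above, and ResizeMethod.SQUISH is the fastai v1 resize mode that keeps the whole image by changing its aspect ratio):

img = data_test[0]
img_in = img.apply_tfms(preprocess()[1], size=size,
                        resize_method=ResizeMethod.SQUISH)  # deterministic: full image, no crop
pred_mask = learn.predict(img_in)[0]                        # mask for that exact image
img_in.show(y=pred_mask)                                    # overlay on the full input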