Converting tensor prediction back to visual mask (fastai: reconstruct_output)

Hi all, in order to better understand the library and the results I get, I like to break down some of the key functions and rebuild them where possible. Right now I am stuck on the last part of the show_results function for ImageSegment problems (like camvid), where it calls reconstruct_output.

Essentially, this function takes the final tensor of predictions (argmaxed etc.) and, since the dependent variable (y) is an ImageSegment, it has the (auto-fastai-magical ;-)) ability to be shown as a mask. I would like to understand how to do that conversion without relying on the library to convert and ‘show’ it… Hope the image below helps to explain:

Essentially, for the camvid example: if I have the tensor of predicted class codes for an image, how can I then show it as an image?
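
For reference, this is roughly what I imagine it boils down to (purely a guess on my part; pred_codes here would be the H x W tensor of argmaxed class codes, img the original image as a numpy array, and the colormap and alpha are arbitrary choices):

import matplotlib.pyplot as plt

# pred_codes: H x W integer class codes (after argmax over the class dimension)
# img: the original H x W x 3 image as a numpy array
fig, ax = plt.subplots()
ax.imshow(img)                                   # show the input image first
ax.imshow(pred_codes, cmap='tab20', alpha=0.5)   # overlay the codes as a colored mask
ax.axis('off')
plt.show()

Is that more or less what reconstruct_output / ImageSegment.show are doing under the hood?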


I have the same question :thinking:

Sorry for the late answer…

import numpy as np
import matplotlib.pyplot as plt

preds = learn.get_preds()                              # (predictions, targets)
predicted_masks = np.argmax(to_np(preds[0]), axis=1)   # to_np is fastai's tensor->numpy helper; argmax over the class dimension gives a code per pixel
for i in range(len(predicted_masks)):
    mask = predicted_masks[i] * 50                     # scale the integer codes so they show up as distinct gray levels
    plt.imshow(mask)
    plt.show()

The mask will be shown in grayscale (each class code becomes a gray level).
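
If you want a color mask instead, one option (not the fastai way, just plain numpy; the toy palette below only covers codes 0-3, so for camvid you would need one row per class code) is to index a palette array with the codes:

import numpy as np
import matplotlib.pyplot as plt

# one RGB color per class code (values here are arbitrary examples)
palette = np.array([[0, 0, 0], [128, 0, 0], [0, 128, 0], [0, 0, 128]], dtype=np.uint8)

codes = predicted_masks[0]        # H x W array of integer class codes from the snippet above
color_mask = palette[codes]       # numpy fancy indexing -> H x W x 3 RGB image
plt.imshow(color_mask)
plt.show()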
