Lesson 3: Avoid training on 'void' pixels?

In the segmentation exercise with the CamVid dataset, some of the pixels in the ground-truth masks have the category 'void', which doesn't count when evaluating accuracy.

If they mean nothing, shouldn't we avoid training the network on them?

The network will still try to predict the 'void' category (after all, void is a category like any other), but that effort is meaningless and could even hurt, since the network would struggle to resolve something that has no solution.

Would that avoided work help the network?

If so, how can we do it?
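One common way to do it (a sketch in plain PyTorch, not necessarily the lesson's exact code; `VOID_CODE` is a hypothetical class index) is to pass `ignore_index` to `CrossEntropyLoss`, so void pixels contribute no loss and no gradient, and to mask them out of the accuracy metric, in the spirit of the lesson's `acc_camvid`:

```python
import torch
import torch.nn as nn

# Hypothetical index of the 'void' class in the CamVid codes list
VOID_CODE = 30

# 1) Loss: ignore_index makes void pixels produce no loss and no
#    gradient, so the network is never trained on them.
loss_fn = nn.CrossEntropyLoss(ignore_index=VOID_CODE)

# 2) Metric: accuracy computed only over non-void pixels
def acc_ignoring_void(logits, target, void_code=VOID_CODE):
    keep = target != void_code       # boolean mask of real pixels
    preds = logits.argmax(dim=1)     # [batch, H, W] predicted classes
    return (preds[keep] == target[keep]).float().mean()

# Toy check: 31 classes, a 2x2 image, every pixel void except one
logits = torch.zeros(1, 31, 2, 2)
logits[0, 3, 0, 0] = 10.0                              # predict class 3 at (0, 0)
target = torch.full((1, 2, 2), VOID_CODE, dtype=torch.long)
target[0, 0, 0] = 3                                    # the only non-void pixel
loss = loss_fn(logits, target)                         # loss over that one pixel only
acc = acc_ignoring_void(logits, target)                # 1.0: that pixel is correct
```

So the answer to "would that avoided work help?" is at least: it costs nothing to skip void pixels in the loss, and it keeps the gradient focused on pixels that actually have a right answer.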

I tried to "see" where those void pixels are, and I can understand why they are there (unimportant or hard-to-categorize areas, due to distance or a mix of materials).

```python
# Extract the void pixels as a 2-D boolean mask.
# mask.data has shape torch.Size([1, 720, 960]), so drop the channel dim;
# void_code is assumed to be the index of the 'void' class in the codes list.
nvoid2 = (mask.data[0] == void_code)
nvoid2.size()  # -> torch.Size([720, 960])
```
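To get a feel for how much work would be avoided, one can also compute the fraction of void pixels from such a boolean mask (a sketch with a tiny stand-in tensor; in the notebook the real `nvoid2` of shape `[720, 960]` would go here):

```python
import torch

# Tiny stand-in for the boolean void mask (one void pixel out of four)
void_mask = torch.tensor([[True, False],
                          [False, False]])

# Fraction of pixels that are void
frac_void = void_mask.float().mean().item()
print(f"{frac_void:.1%} of pixels are void")  # -> 25.0% of pixels are void
```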

Here is a visual example:

```python
import matplotlib.pyplot as plt

# Void pixels in white, everything else in black
plt.imshow(nvoid2.numpy(), cmap='gray')
plt.show()

# The original mask, for comparison
mask.show(figsize=(5, 5), alpha=1)
```
