Why your foreground_acc is returning nan

Here is one likely reason why foreground_acc would return nan in your metrics. If your dataset contains items whose masks are entirely background, a validation batch can end up consisting of nothing but all-background images. foreground_acc only measures accuracy over the non-background pixels, so for such a batch it ends up computing 0/0 (zero matching pixels out of zero foreground pixels), which comes out as nan. Here is a piece of code that can help identify this case:

import torch

with torch.no_grad():  # no gradients needed for a diagnostic pass
    for x, y in learn.dls.valid:
        out = learn.model(x)
        print(foreground_acc(out, y))  # nan here flags an all-background batch

With a larger dataset, you could extend this loop to break as soon as a batch produces nan instead of printing every result.
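
If you want to see why the metric comes out as nan rather than raising an error, here is a rough standalone sketch in plain PyTorch (not fastai's exact implementation, but it follows the same idea): the accuracy is a mean taken only over non-background pixels, so an all-background batch means a mean over an empty tensor, which is nan.

import torch

def fg_acc_sketch(pred, targ, bkg_idx=0):
    # Only score pixels whose target is NOT background
    mask = targ != bkg_idx
    # Mean over an empty selection (all-background batch) is nan
    return (pred.argmax(dim=1)[mask] == targ[mask]).float().mean()

pred = torch.randn(2, 3, 4, 4)                 # 2 images, 3 classes, 4x4 pixels
targ = torch.zeros(2, 4, 4, dtype=torch.long)  # every pixel labelled background (class 0)
print(fg_acc_sketch(pred, targ))               # prints tensor(nan)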

The resolution for me here was to increase my batch size, which makes an all-background batch much less likely. If that isn’t an option, you might be able to deal with the problem by generating your batches more intentionally, e.g. making sure every batch contains at least one image with foreground pixels, as sketched below.
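
One way to generate batches more intentionally is to simply drop the all-background images before building the DataLoaders, so no batch can end up empty of foreground. This is only a rough sketch under some assumptions: get_msk is a hypothetical helper that maps an image path to its mask path, and path and codes are whatever you already use for your segmentation dataset.

from fastai.vision.all import *
import numpy as np
from PIL import Image

def has_foreground(img_path, bkg_idx=0):
    # get_msk is a hypothetical helper: image path -> mask path
    msk = np.array(Image.open(get_msk(img_path)))
    return (msk != bkg_idx).any()

# Keep only images whose mask has at least one non-background pixel
items = [p for p in get_image_files(path/'images') if has_foreground(p)]

dls = SegmentationDataLoaders.from_label_func(
    path, items, get_msk, codes=codes, bs=8)

Alternatively, oversampling the foreground-containing images (e.g. with fastai’s weighted dataloaders) achieves a similar effect without discarding data.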

Thanks @KevinB!! I had exactly the same problem and couldn’t figure out why… your post made my day :heart_eyes:
Thanks a lot!
