I am trying to use fastai v2 to train a U-Net segmentation model on a vehicle-parts dataset of around 100 images. The images were taken from particular views of the car, so mostly a small area of the parts appears in each image. I am not sure why the accuracy is so bad (the reported Dice is above 3, which doesn't even look like a valid score). Should the images include a larger view of the car? Should I add more images? Or something else?
Your training and validation losses are still decreasing, so the model can still improve; training for more epochs could be all that's needed. That alone could be why your results don't look good yet.
Regarding the Dice metric: fastai's plain `Dice` is for binary segmentation, so I think you need `DiceMulti` since you have multiple part labels. Using the binary version on a multi-class mask would also explain the nonsensical score you are seeing.
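To see why a binary Dice breaks down on multi-class masks, here is a minimal NumPy sketch of the per-class Dice computation that a `DiceMulti`-style metric averages. This is purely illustrative, not fastai's actual implementation:

```python
import numpy as np

def dice_multi(pred, targ, num_classes):
    """Mean per-class Dice over classes present in pred or targ.

    pred, targ: integer label maps of the same shape.
    A binary Dice would lump all nonzero labels into one class,
    which is meaningless when the mask has several part labels.
    """
    scores = []
    for c in range(num_classes):
        p = pred == c
        t = targ == c
        inter = (p & t).sum()
        denom = p.sum() + t.sum()
        if denom > 0:  # skip classes absent from both maps
            scores.append(2 * inter / denom)
    return float(np.mean(scores))

# Tiny example: a 2x2 mask with 3 labels (0=background, 1, 2);
# the prediction misses one pixel of class 2.
targ = np.array([[0, 1], [2, 2]])
pred = np.array([[0, 1], [2, 0]])
print(dice_multi(pred, targ, num_classes=3))  # → 0.777... (mean of 2/3, 1.0, 2/3)
```

With only ~100 images, the per-class averaging also makes it obvious when a rare part class is dragging the overall score down, which a single binary number would hide.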