The notebook you linked doesn’t have 0 training loss and validation loss. Can you show us the rest of your code? I’m not qualified to answer your question, but I’m trying to help you get an answer quicker. Read How to ask for help
When you run the code on the CamVid dataset, do you get decent results? If so, check your dataset. In the images shown the mask is all black. Is that just a rendering issue with the picture, or do you see the correct mask in fastai’s show method?
I have used the correct mask. If you open it with an image processing library, you will find only 0 and 1 pixels, as it is a binary classification problem. I have uploaded my notebook, please check it.
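A quick way to verify this claim is to inspect the unique pixel values of the mask. A minimal sketch (using a synthetic array as a stand-in for the real file; in practice you would load it with `np.array(PIL.Image.open(mask_path))`, where `mask_path` is your own mask file):

```python
import numpy as np

# Synthetic stand-in for a loaded binary mask:
# in practice, mask = np.array(Image.open(mask_path))
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

# A correct binary mask should contain only the values 0 and 1
print(np.unique(mask))  # [0 1]
```

If this prints something like `[0 255]` instead, the mask is stored in 0–255 format and needs dividing before use as class labels.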
How many images do you have? If not many, instead of plain get_transforms(), try adding a bunch of different transforms to augment your data sample, and/or get more images.
You are doing open_mask(get_y_fn(img_f)) to show the mask in the Jupyter notebook, but you then use return open_mask(fn, div=True) in the actual dataset. The div=True does division by 255, so if your mask is already in 0–1 format, which might be the case, you are turning it all to 0. That would also explain why I don’t see the masks in show_batch.
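A minimal numpy sketch of the effect described above, assuming the mask already stores class indices 0/1 (this simulates the division that div=True performs, without depending on fastai itself):

```python
import numpy as np

# A binary mask whose pixels are already 0/1 (not 0/255)
mask = np.array([[0, 1],
                 [1, 0]], dtype=np.uint8)

# What div=True effectively does: divide by 255, then use as class indices
divided = mask / 255                    # values become 0.0 and ~0.0039
as_labels = divided.astype(np.int64)    # everything collapses to class 0

print(as_labels)        # all zeros: the foreground class is wiped out
print(mask.astype(np.int64))  # without the division the classes survive
```

So div=True is only appropriate when the mask is saved in 0/255 format; for a mask that already holds 0/1 labels, open it without it.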
Hi, I am just starting off with segmentation. I just want to know how heavy your saved models were. I am trying a similar task, but my resnet50 model, without any fine-tuning, occupies 3.2 GB, which is quite a lot compared to the 200–300 MB models I saved for classification tasks.
I understand that segmentation is a kind of per-pixel classification and thus more resource-intensive, so does that mean segmentation models would be about 10 times heavier on disk, as in my case?
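As a point of comparison for the sizes quoted above, here is a back-of-the-envelope estimate of the raw fp32 weight size, assuming the commonly cited ~25.6M parameters for a ResNet-50 backbone (the exact count depends on the head attached for segmentation):

```python
# Rough size estimate: parameters x 4 bytes per fp32 value
params = 25_600_000          # approximate ResNet-50 parameter count (assumption)
bytes_fp32 = params * 4      # fp32 = 4 bytes per parameter

print(bytes_fp32 / 1e6)      # ~102.4 (MB)
```

If the file on disk is much larger than this estimate, it usually means more than the bare weights was saved (e.g. optimizer state), rather than the segmentation task itself inflating the weights.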