Binary U-Net Segmentation

I am trying to train a scene-text segmentation model using the U-Net architecture, as shown in the camvid lesson.

My training and validation loss both drop to 0 after some time, and the model produces no results either. Please let me know if anyone has suggestions.

Training Images: 5603
Mask Images: 5603
Image Size: 256×256

Original Image

Mask Image

Notebook example:

The notebook you linked doesn’t have 0 training and validation loss. Can you show us the rest of your code? I’m not qualified to answer your question, but I’m trying to help you get an answer quicker. Read How to ask for help.

I am following the same code. Just changed the dataset folder.


When you run the code on the camvid dataset, do you get decent results? If so, check your dataset. In the images shown the mask is all black. Is that just a rendering issue of the picture, or do you see the correct mask in the fastai show method?

Hi Adrian,

I have used the correct mask. If you open it with an image processing library, you will find only 1 and 0 pixel values, as this is a binary classification problem. I have uploaded my notebook; please look into it.

This dataset is from ICDAR 2019
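One quick way to verify this is to inspect the unique pixel values of a mask directly. A minimal sketch using Pillow and NumPy (the synthetic mask here is only a stand-in; point `Image.open` at one of your real mask files):

```python
import numpy as np
from PIL import Image

# Create a tiny synthetic binary mask to stand in for a real one
# (replace "mask.png" with an actual mask path from your dataset).
Image.fromarray(np.array([[0, 1], [1, 0]], dtype=np.uint8)).save("mask.png")

mask = np.array(Image.open("mask.png"))
print(np.unique(mask))  # a binary 0/1 mask should print [0 1]
```

If this prints values other than 0 and 1 (e.g. [0 255]), the masks are not in the format the training code assumes.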

Looks all OK until fit_one_cycle(). Can you try fit_one_cycle with the default pct_start of 0.3 and see if that improves the result?

No improvement. Could it be that the model is not finding any consistent features across the dataset?

Does it run OK if you use the camvid dataset?

How many images do you have? If not many, instead of get_transforms() try adding a bunch of different transforms to boost your data sample, and/or get more images.
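The one constraint specific to segmentation is that any spatial transform must be applied to the image and its mask identically, or the labels stop lining up (fastai handles this for you when you pass tfm_y=True). A minimal NumPy sketch of the idea, with paired_hflip as a hypothetical helper:

```python
import numpy as np

def paired_hflip(img, mask, p=0.5, rng=None):
    """Horizontally flip image and mask together with probability p.

    For segmentation, every spatial augmentation must hit the image
    AND its mask in exactly the same way.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        return img[:, ::-1], mask[:, ::-1]
    return img, mask

img = np.arange(12).reshape(3, 4)
mask = (img % 2).astype(np.uint8)
aug_img, aug_mask = paired_hflip(img, mask, p=1.0)  # force the flip
```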


You are doing open_mask(get_y_fn(img_f)) to show the mask in the Jupyter notebook, but you then use return open_mask(fn, div=True) in the actual dataset. div=True divides by 255, so if your mask is already in 0-1 format, which might be the case, you are making it all 0. That would also explain why I don’t notice the masks in show_batch.
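A minimal NumPy sketch of the effect (fastai does the division on a tensor and then casts targets to integer class indices, but the outcome is the same: the foreground class collapses to 0):

```python
import numpy as np

# A binary mask that already uses 0/1 pixel values.
mask = np.array([[0, 1], [1, 0]], dtype=np.uint8)

# open_mask(fn, div=True) divides pixel values by 255; on a 0/1 mask
# the resulting class indices are all zero, so every pixel becomes
# background and the loss can trivially go to 0.
divided = mask // 255
print(divided)  # all zeros
```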


Thanks. My mistake was creating masks with 0 and 1 pixel values. The problem is solved now.
Thanks once again
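For anyone hitting the same issue: one fix is to rewrite the masks as 0/255 so that div=True maps them back to 0/1 (the alternative is to keep 0/1 masks and drop div=True). A minimal sketch, with hypothetical filenames:

```python
import numpy as np
from PIL import Image

# Rewrite a 0/1 mask as 0/255 so that dividing by 255 restores 0/1.
mask = np.array([[0, 1], [1, 0]], dtype=np.uint8)
Image.fromarray(mask * 255).save("mask_255.png")

reloaded = np.array(Image.open("mask_255.png"))
print(np.unique(reloaded))         # [  0 255]
print(np.unique(reloaded // 255))  # [0 1] after the division
```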

I will try this one also. It will definitely boost my accuracy.
Thanks Adrian

Hi. I am just starting off with segmentation. I just want to know how heavy your saved models were. I am trying a similar task, but my resnet50 model, without any fine-tuning, occupies 3.2GB, which is a lot compared to the 200-300MB models I saved for classification tasks.

I understand that segmentation is a kind of per-pixel classification and thus more resource-intensive, so does that mean segmentation models should be about 10 times heavier in terms of disk space, as in my case?
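For what it's worth, file size is driven by parameter count, not by the per-pixel output, so a rough back-of-envelope check can tell you whether 3.2GB is plausible for weights alone. A sketch, assuming float32 weights and an approximate parameter count for resnet50 (a U-Net decoder adds more, but the total stays in the tens of millions):

```python
# float32 weights take 4 bytes per parameter; resnet50 has roughly
# 25.6M parameters, so its weights alone are on the order of 100MB.
# A file much larger than weights-alone usually means extra state
# (e.g. optimizer state) was saved alongside the weights.
params = 25.6e6
size_mb = params * 4 / 1e6
print(f"{size_mb:.0f} MB")
```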

Thank you!

Did you actually submit it on Kaggle? I am having issues adding the test files, and it would be a great help if you could guide me through doing the same.