But my mask values are already between 0 and 1; I have checked that.
The model (with a ResNet backbone) does train: when I predict the mask (label output) for one of my test images, it gives me a result.
But I think the problem happens during plotting: it cannot read the values at the proper indexes, and that is why it raises an error. I am not sure, but I feel that might be one reason.
When I run learn.recorder.plot(), it gives me a blank learning-rate vs. loss plot.
I also tried increasing the batch size, but it still gives me the error.
See the white squares? They are 255.
Can you upload a picture of your folder showing all the mask images?
Or did you divide the 255 pixel values down to 1 before feeding them to the model?
The mask images are normalized via normalize() in the code below:
data = (src.transform(get_transforms(), tfm_y=True)
.databunch(bs=4)
.normalize())
Maybe I gave the labels incorrectly!
Here is my procedure:
-> I have images and, for each image, its mask (1 = salt, 0 = not salt).
-> I have to do image segmentation. I have two folders: 1. images (contains all input images; in our case the image size is 3 x 101 x 101); 2. masks (contains the mask image for each corresponding input image; the mask size is 1 x 101 x 101).
-> src = (SegmentationItemList.from_folder(image_path)
.split_subsets(train_size=0.8, valid_size=0.2)
.label_from_???())
The code above prepares the list of input images from image_path, then splits them into train and validation sets (80/20).
-> Now my doubt is: how do I label each input image with its mask, which is an image in the masks folder?
Previously I did it with label_from_func(get_y_fn),
where,
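(For illustration only, since the original definition isn't shown here: a common pattern for such a get_y_fn, assuming masks live in a sibling masks folder with the same filenames as the images, is something like this.)

```python
from pathlib import Path

# Hypothetical example: map an image path such as data/images/abc.png
# to the corresponding mask path data/masks/abc.png.
# The folder names "images" and "masks" are assumptions here.
def get_y_fn(image_path):
    p = Path(image_path)
    return p.parent.parent / "masks" / p.name
```

This is then passed as label_from_func(get_y_fn) in the data block pipeline.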
Although I have downloaded the dataset, I will try to solve your problem by the end of this month if I have free time.
However, I suggest you solve it yourself, because all you need to do is process your image values from 255 down to 1 or 0. I am also new to fastai; I only started using it one or two months ago. I think someone better qualified will come along to help you.
Hi @khushi810 - I’d highly recommend you change to fastai v2 if you are doing binary segmentation.
I did it with v1 but I had to do some subclassing etc to get it to work.
In v2, things are much cleaner - no subclassing etc.
The other issue @JonathanSum pointed out is if your masks are [0,255] for [background, salt], then fastai won’t work well (or at all).
Fastai (either version) wants contiguous values for the codes, i.e. 0, 1, 2, 3, etc.
Starting at 0 and then jumping straight to 255 won't go well.
For v2, thanks to @muellerzr's code, you can remap the values from 255 to 1 quickly in the get_y function.
For binary segmentation, it can be as quick as mask[mask==255] = 1 in your get_y.
Or you could just load each mask in a script, change the values, save it back out, and be done without having to intercept anything.
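A minimal sketch of that second option (the folder layout and use of PIL are assumptions, not from the original post): remap each saved mask once so the pixel values become contiguous class codes.

```python
import numpy as np
# from PIL import Image        # uncomment to read/write real mask files
# from pathlib import Path

def remap_mask(mask):
    """Collapse a {0, 255} binary mask to contiguous codes {0, 1}."""
    mask = mask.copy()           # don't mutate the caller's array
    mask[mask == 255] = 1        # foreground 255 -> class index 1
    return mask

# One-off conversion over a hypothetical masks/ folder:
# for p in Path("masks").glob("*.png"):
#     m = remap_mask(np.array(Image.open(p)))
#     Image.fromarray(m).save(p)
```

After this, the masks contain only 0 and 1, so no get_y interception is needed.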
Also, you can pass things like (cmap="Blues", vmin=0, vmax=1) to your show_results function to highlight the masks. That was another thing that made a huge difference, as you can then see the generated masks automatically (see the notebook above).
Hi @jeremy and @sgugger, I'm working on segmentation with fastai and I want to use my own custom head for it. I don't want to use a U-Net, for learning purposes. I am also using BCEWithLogitsLoss.
The dataset is CamVid and it has 32 labels. I checked that my masks' values range from 0 to 31; nevertheless, I get "Target 21 out of bounds" on CPU and "Device-side assert triggered" on GPU.
I tried even the new fastai v2, but to no avail (note: fastai's cnn_learner wants data.c and train_dl from the DataBlock, which doesn't happen in v1).