IndexError: Target 63 is out of bounds while using a unet on a Kaggle dataset

Hello all,

I am using the dataset from the TGS Salt Identification Challenge on Kaggle (https://www.kaggle.com/c/tgs-salt-identification-challenge/data).

I am new to fastai. I am running this on GCP.

My training data is images, and the output is the mask for each image. I am using the code below:

src = (SegmentationItemList.from_folder(image_path)
       .split_subsets(train_size=0.8, valid_size=0.2)
       .label_from_func(get_y_fn, classes=codes))
# codes = ['salt', 'not_salt'] labels
learn = unet_learner(data, models.resnet34, metrics=metrics, wd=wd)

The above lines of code run perfectly, but when I execute the line

lr_find(learn)

it gives me an error like: IndexError: Target 101 is out of bounds.
The target number changes on every run.

Can anyone please help me out here with how to solve this kind of error and what the reason behind it is?

Thank you!

I think I know what the problem is. It looks like the mask labels are not in the range 0 to 1; they are 0, 255, or more.
Take a look here:

[screenshot of an example mask with its label values visible as small numbers]

You can see the green 4 in that example. But in your mask image, each pixel will be either 255 or 0.

I highly suggest checking the range of the pixel values in your mask images.
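For example, a quick check with numpy and PIL might look like this (masks_dir is a hypothetical path; point it at your masks folder):

import numpy as np
from pathlib import Path
from PIL import Image

masks_dir = Path('masks')  # adjust to your masks folder
values = set()
for fn in masks_dir.iterdir():
    values |= set(np.unique(np.array(Image.open(fn))))
print(values)  # a binary problem should give {0, 1}, not {0, 255}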

Thank you for your suggestions @JonathanSum.

But my mask values are already between 0 and 1; I have checked that.
Also, the model does train with the ResNet backbone, because when I tried to predict a mask (the label output) for one of my test images, it gave me a result.

But I think the problem happens during plotting: it cannot get the values properly according to the indexes, and that is why it gives me an error. I am not sure about it, but I felt it might be one reason.

When I run learn.recorder.plot(), it gives me a blank plot of learning rate vs. loss.

I also tried increasing the batch size, but it still gives me the error.

I don’t know how to resolve it.

I checked the mask images for you.

[screenshot of a mask: the salt regions appear as white squares]

See the white squares? They are 255.
Can you upload a picture of your folder that shows all the mask images?
Or did you divide the 255 pixel values down to 1 before inputting them to the model?

Hi @JonathanSum !

The mask images are normalized via normalize() in the code below:

data = (src.transform(get_transforms(), tfm_y=True)
        .databunch(bs=4)
        .normalize())

Maybe I gave the labels incorrectly!
My procedure is below:

-> See, I have images, and for each image its mask (1 = salt, 0 = not salt).
-> I have to do image segmentation. I have two folders: 1 - images (contains all input images; image size is 3 x 101 x 101), 2 - masks (contains a mask image for each corresponding input image; mask size is 1 x 101 x 101).

-> src = (SegmentationItemList.from_folder(image_path)
          .split_subsets(train_size=0.8, valid_size=0.2)
          .label_from_???())
The above code prepares the list of images (the input images from image_path) and then splits them into train and validation sets (80/20).

-> Now my doubt is: how do I label each input image according to its mask, which is stored as an image in the masks folder?

Previously I did it with label_from_func(get_y_fn), where:

def get_y_fn(x): return Path(str(x.parent))/x.name
codes = array(['1', '0'])

I took this from the tutorial/lecture example on image segmentation, but I am not sure it is right.

Because in my case I have to pass the mask image as the label.

I also tried

mask_labels = (SegmentationLabelList.from_folder(mask_path)
               .split_subsets(train_size=0.8, valid_size=0.2))

It generates ItemLists, but I don't know how to give them as the labels. How can I solve this?

I hope you get my point now.

Although I have downloaded the dataset, I will only be able to try solving your problem by the end of this month, if I have free time.
However, I suggest you solve it yourself, because all you need to do is process your mask images from 255 down to 1 or 0. I am also new to fastai; I just started using it 1 to 2 months ago, so I think someone better will come here to help you.
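A rough, untested sketch of what I mean for fastai v1, assuming your masks folder mirrors the images folder with the same filenames: open the masks with div=True so 0/255 loads as 0/1, and point get_y_fn at the masks folder.

from fastai.vision import *

path = Path('data/tgs')                    # hypothetical root folder
image_path, mask_path = path/'images', path/'masks'

def get_y_fn(x): return mask_path/x.name   # same filename, masks folder

class BinaryLabelList(SegmentationLabelList):
    # div=True divides the pixel values by 255, so masks load as 0/1
    def open(self, fn): return open_mask(fn, div=True)

class BinaryItemList(SegmentationItemList):
    _label_cls = BinaryLabelList

codes = ['not_salt', 'salt']               # index 0 = background, 1 = salt
src = (BinaryItemList.from_folder(image_path)
       .split_subsets(train_size=0.8, valid_size=0.2)
       .label_from_func(get_y_fn, classes=codes))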

@JonathanSum, Thank you!

Yeah, I will try to debug it. I now have some understanding of label_from_func and I am able to train the model.
I still get errors in the plotting, though.

By the way, thank you.

Hi @khushi810 - I'd highly recommend you change to fastai v2 if you are doing binary segmentation.
I did it with v1, but I had to do some subclassing etc. to get it to work.
In v2, things are much cleaner - no subclassing needed.

The other issue @JonathanSum pointed out is that if your masks are [0, 255] for [background, salt], then fastai won't work well (or at all).
Fastai (either version) wants contiguous values for the codes, i.e. 0, 1, 2, 3, etc.
Starting at 0 and then jumping to 255 won't go well.

For v2, thanks to @muellerzr's code, you can quickly remap the values from 255 to 1 in the get_y function:

For binary segmentation it can be as quick as

mask[mask == 255] = 1

in your get_y, or you could just load each mask in a script, change the values, save it back out, and be done without having to intercept anything.
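As a concrete sketch in v2 (untested; assumes a masks/ folder with the same filenames as the images, and hypothetical paths):

from fastai.vision.all import *

path = Path('data/tgs')  # hypothetical root folder

def get_msk(fn):
    msk = np.array(PILMask.create(path/'masks'/fn.name))
    msk[msk == 255] = 1  # remap so the codes are contiguous: 0, 1
    return PILMask.create(msk)

dls = SegmentationDataLoaders.from_label_func(
    path, get_image_files(path/'images'), get_msk,
    codes=['not_salt', 'salt'], bs=4)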

Also, you can pass things like (cmap="Blues", vmin=0, vmax=1) to your show_results call to highlight the masks. That made a huge difference for me, as you can then see the generated masks automatically (see the notebook above).
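For example, something like this should do it (untested; the extra kwargs are forwarded to the mask plotting):

learn.show_results(max_n=4, cmap='Blues', vmin=0, vmax=1)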

Hope that helps!


Hi @jeremy and @sgugger, I'm working on segmentation with fastai and I want to use my own custom head for it; I don't want to use a unet, for learning purposes. I am also using BCEWithLogitsLoss.
The dataset is CAMVID and it has 32 labels. I checked that my masks' values range from 0 to 31; nevertheless I get "Target 21 is out of bounds" on CPU and "Device-side assert triggered" on GPU.
I even tried the new fastai v2, but to no avail (note: fastai's cnn_learner wants data.c and train_dl from the DataBlock, which doesn't happen in v1).

flatten_channel = Lambda(lambda x: x[:, 0])

class StdUpsample(nn.Module):
    def __init__(self, nin, nout):
        super().__init__()
        self.conv = nn.ConvTranspose2d(nin, nout, 2, stride=2)
        self.bn = nn.BatchNorm2d(nout)

    def forward(self, x): return self.bn(F.relu(self.conv(x)))

simple_up = nn.Sequential(
    nn.ReLU(),
    StdUpsample(512, 256),
    StdUpsample(256, 256),
    StdUpsample(256, 256),
    StdUpsample(256, 256),
    nn.ConvTranspose2d(256, 1, 2, stride=2)  # (bs, 1, 128, 128)
)

crit = nn.BCEWithLogitsLoss()  # instantiate the module, not just the class

def BCE_loss(input, target):
    return crit(flatten_channel(input.float()), flatten_channel(target.float()))

learner = cnn_learner(data, models.resnet34, custom_head=simple_up)
learner.opt_func = Adam       # v1 attribute is opt_func, not opt_fn
learner.loss_func = BCE_loss  # v1 attribute is loss_func, not loss_fn
learner.metrics = [accuracy]