Test dataset with bounding box

I am working with the RSNA Kaggle dataset, trying to follow the class 8 and 9 notebooks. At first I built a model using only patientIds that have target = 1, keeping the largest bounding box per image. This way, each image has exactly one bounding box and there are only 4 outputs per input. Results were awful, but I was still able to use learn.predict(is_test=True) on Kaggle’s dataset.

After making the dataset “md”, I checked the validation dataset and the test dataset, and found the following.

md.val_ds[0][1]

-> array([ 46., 39., 169., 105.], dtype=float32)

&

md.test_ds[0][1]

-> array([0., 0., 0., 0.], dtype=float32)

So far so good!!! I could follow all the steps and obtain predictions on the test dataset.

But when I made a csv file with all bounding boxes for a given patientId in the “bbox” column, I found that the test dataset could not handle it properly. Here is an example with two bounding boxes for one patientId (some patientIds have one and some have three bounding boxes).

md.val_ds[0][1]

-> array([ 46., 39., 169., 105., 55., 150., 180., 205.], dtype=float32)

However,

md.test_ds[0][1]

-> the following traceback:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-94-47fe584d9349> in <module>()
----> 1 md.test_ds[0][1]

~/fastai/courses/dl2/fastai/dataset.py in __getitem__(self, idx)
    201             xs,ys = zip(*[self.get1item(i) for i in range(*idx.indices(self.n))])
    202             return np.stack(xs),ys
--> 203         return self.get1item(idx)
    204 
    205     def __len__(self): return self.n

~/fastai/courses/dl2/fastai/dataset.py in get1item(self, idx)
    195     def get1item(self, idx):
    196         x,y = self.get_x(idx),self.get_y(idx)
--> 197         return self.get(self.transform, x, y)
    198 
    199     def __getitem__(self, idx):

~/fastai/courses/dl2/fastai/dataset.py in get(self, tfm, x, y)
    206 
    207     def get(self, tfm, x, y):
--> 208         return (x,y) if tfm is None else tfm(x,y)
    209 
    210     @abstractmethod

~/fastai/courses/dl2/fastai/transforms.py in __call__(self, im, y)
    646         self.tfms.append(ChannelOrder(tfm_y))
    647 
--> 648     def __call__(self, im, y=None): return compose(im, y, self.tfms)
    649     def __repr__(self): return str(self.tfms)
    650 

~/fastai/courses/dl2/fastai/transforms.py in compose(im, y, fns)
    621     for fn in fns:
    622         #pdb.set_trace()
--> 623         im, y =fn(im, y)
    624     return im if y is None else (im, y)
    625 

~/fastai/courses/dl2/fastai/transforms.py in __call__(self, x, y)
    233         x,y = ((self.transform(x),y) if self.tfm_y==TfmType.NO
    234                 else self.transform(x,y) if self.tfm_y in (TfmType.PIXEL, TfmType.CLASS)
--> 235                 else self.transform_coord(x,y))
    236         return x, y
    237 

~/fastai/courses/dl2/fastai/transforms.py in transform_coord(self, x, ys)
    264     def transform_coord(self, x, ys):
    265         yp = partition(ys, 4)
--> 266         y2 = [self.map_y(y,x) for y in yp]
    267         x = self.do_transform(x, False)
    268         return x, np.concatenate(y2)

~/fastai/courses/dl2/fastai/transforms.py in <listcomp>(.0)
    264     def transform_coord(self, x, ys):
    265         yp = partition(ys, 4)
--> 266         y2 = [self.map_y(y,x) for y in yp]
    267         x = self.do_transform(x, False)
    268         return x, np.concatenate(y2)

~/fastai/courses/dl2/fastai/transforms.py in map_y(self, y0, x)
    258 
    259     def map_y(self, y0, x):
--> 260         y = CoordTransform.make_square(y0, x)
    261         y_tr = self.do_transform(y, True)
    262         return to_bb(y_tr)

~/fastai/courses/dl2/fastai/transforms.py in make_square(y, x)
    254         y1 = np.zeros((r, c))
    255         y = y.astype(np.int)
--> 256         y1[y[0]:y[2], y[1]:y[3]] = 1.
    257         return y1
    258 

IndexError: index 2 is out of bounds for axis 0 with size 1

I suspect that the test Dataset cannot be initialized properly when the training/validation dataset has a variable number (one, two, three, or more) of bounding boxes/outputs.
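The traceback is consistent with that: in fastai 0.7, CoordTransform.transform_coord splits y into chunks of 4 with partition, and make_square then indexes positions 0–3 of each chunk. Here is an illustrative sketch of what goes wrong when the test target has length 1 rather than a multiple of 4 (the partition helper below mirrors the fastai 0.7 one; the arrays are the values from above):

```python
import numpy as np

def partition(a, sz):
    # fastai 0.7-style helper: split a sequence into chunks of size sz
    return [a[i:i + sz] for i in range(0, len(a), sz)]

# validation target: two boxes flattened into 8 values -> two chunks of 4
val_y = np.array([46., 39., 169., 105., 55., 150., 180., 205.])
print([len(c) for c in partition(val_y, 4)])  # [4, 4]

# test target: fastai 0.7 builds a dummy y of length 1, not a multiple of 4
test_y = np.array([0.])
chunk = partition(test_y, 4)[0]  # a single-element chunk
try:
    _ = chunk[2]  # make_square reads positions 2 and 3 of each chunk
except IndexError as e:
    print(e)  # index 2 is out of bounds for axis 0 with size 1
```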

Because of this issue, I can’t use

learn.predict(is_test= True)

Has anyone had a similar issue?

Maybe just add some bounding boxes at [0,0,0,0] for all the test images as a work around?
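A minimal sketch of that workaround with pandas (the column names “patientId” and “bbox”, the ids, and the file name are assumptions; match them to your csv):

```python
import pandas as pd

# hypothetical patient ids for the test set
test_ids = ["id_001", "id_002", "id_003"]

# one dummy [0, 0, 0, 0] box per test image, stored the same way as the
# training targets, so every test y has length 4
test_df = pd.DataFrame({
    "patientId": test_ids,
    "bbox": ["0 0 0 0"] * len(test_ids),
})
test_df.to_csv("test_bbox.csv", index=False)
print(test_df)
```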

Thanks Kevin!

I tried something along similar lines after reading your suggestion: I used only training images with exactly two bounding boxes, so the test dataset was initialized properly.
It reduced my training set significantly, but the error went away.

Thank you !

That is a great dataset to learn how to use medical imaging data.

It’s good that you solved your initial issue. Localization (single-instance) with a single box is very interesting and useful for understanding the bounding-box regression concept.

Ultimately, you’ll want to experiment with object detection architectures and multi-instance bounding boxes. Another way to approach this problem is a semantic segmentation architecture with post-processing to divide the grouped pixels into separate instances.
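To illustrate the post-processing idea, here is a toy sketch (the mask is made up) that uses scipy’s connected-components labelling to split a binary segmentation mask into per-instance bounding boxes:

```python
import numpy as np
from scipy import ndimage

# toy binary segmentation mask with two separate blobs
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:4] = 1   # first blob
mask[5:7, 5:8] = 1   # second blob

# label connected components, then read one bounding box per component
labels, n = ndimage.label(mask)
boxes = [(sl[0].start, sl[1].start, sl[0].stop, sl[1].stop)
         for sl in ndimage.find_objects(labels)]
print(n, boxes)  # 2 [(1, 1, 3, 4), (5, 5, 7, 8)]
```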

Thanks Alexandre!

I am quite new to deep learning. I am working on multi-instance bounding boxes; from your message, it looks like I am on the correct path so far.

Can you please point me towards some literature about semantic segmentation architectures (is it the same as the topics covered in the last lecture, number 14)?

For the competition, I think it is fine to keep the focus on object detection architecture.

If you are interested, Unet (https://arxiv.org/pdf/1505.04597.pdf) and SegNet (https://arxiv.org/pdf/1511.00561.pdf) are, among others, popular incarnations of semantic segmentation networks.

There is also a public kernel using a simple FCN that is basically doing semantic segmentation: https://www.kaggle.com/jonnedtc/cnn-segmentation-connected-components

Thank you for encouragement and links to literature!

Appreciate your suggestions.

I am facing a similar issue here, with the same error returned when the training dataset has more than one bounding box. Tamhash, did you get the error on training images with 4 bounding boxes?

Hi Kareenteo,

Apologies for the late reply.
I managed to make it work by setting all targets to 0,0,0… for NaN and non-existing values.

If you want to consider all images with up to 4 bounding boxes, then
a. An image with 4 bounding boxes will have {x, x, x, x}
b. An image with 3 bounding boxes will have {0x, x, x, x}
c. An image with 2 bounding boxes will have {0x, 0x, x, x}
d. An image with 1 bounding box will have {0x, 0x, 0x, x}
e. An image with no bounding box will have {0x, 0x, 0x, 0x}

as target, where x is {x_min, y_min, x_max, y_max} and 0x is {0, 0, 0, 0}.
I am not sure if this is the best way to handle NaN values and a variable number of bounding boxes.

This setup seems to work, but for lack of time I could not optimize the results.
I hope this helps.
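The padding scheme above can be sketched like this (pad_boxes is a made-up helper; fastai 0.7 expects the flattened target as one row of max_boxes * 4 values per image):

```python
import numpy as np

MAX_BOXES = 4  # pad every target to 4 boxes, as in scheme a-e above

def pad_boxes(boxes, max_boxes=MAX_BOXES):
    """Left-pad a list of [x_min, y_min, x_max, y_max] boxes with
    zero boxes, so every image yields max_boxes * 4 target values."""
    n_pad = max_boxes - len(boxes)
    padded = [[0, 0, 0, 0]] * n_pad + list(boxes)
    return np.array(padded, dtype=np.float32).ravel()

y = pad_boxes([[46, 39, 169, 105]])  # case d: one real box
print(y.shape)       # (16,)
print(y[:4], y[-4:])  # first chunk is a zero box, last chunk is the real box
```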

I’m facing the same issue. With the help of a debugger I found out that the size of y is 1, and also that
learn.data.test_ds.y.shape is (12815, 1) whereas learn.data.trn_ds.y.shape is (11000, 4).

I tried to initialize y of test_ds by adding learn.data.test_ds.y = np.zeros([12815, 4]).
Still I get the same error!

Kindly help me out!

Thank you.

Hi @teja_1,

It’s been a long time.

Also, I suspect that you are using fastai 0.7.
I am using 1.0, which has a completely revamped data block API, and I have been using it for a while.
I can’t recall the exact details.
I will try fastai 1.0 and get back to you soon.

Thank you tamhash!

Yes, I’m using fastai 0.7. Kindly help me at the earliest.

Hi, I am facing the same problem that you faced and solved, but I don’t understand how you did it. Can you share your notebook?

Hi @kakods, I wish I could help, but unfortunately I lost all my notebooks when I upgraded to fastai 1.0.


No problem. I solved it by creating a new model data whose validation set consists of the test set
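A sketch of that trick (the file names, column names, and tiny dataframes are made up): append the test images, with dummy boxes, to the training csv, then mark exactly those rows as the validation set. In fastai 0.7 you would then pass combined.csv and val_idxs to ImageClassifierData.from_csv and read predictions from the validation loader instead of calling learn.predict(is_test=True).

```python
import numpy as np
import pandas as pd

# hypothetical training rows (real boxes) and test rows (dummy boxes)
trn_df = pd.DataFrame({"patientId": ["a", "b"],
                       "bbox": ["46 39 169 105", "10 10 50 50"]})
test_df = pd.DataFrame({"patientId": ["t1", "t2"],
                        "bbox": ["0 0 0 0", "0 0 0 0"]})

# training rows first, test rows appended at the end
combined = pd.concat([trn_df, test_df], ignore_index=True)
combined.to_csv("combined.csv", index=False)

# validation indices = exactly the appended test rows
val_idxs = np.arange(len(trn_df), len(combined))
print(val_idxs)  # [2 3]
```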