Object detection databunch

#1

I am trying to use the new fastai library for object detection and am having a problem while creating the databunch.
I have a csv file with image names and bounding box coordinates (image_name, bbox_coordinates), like this:
tree.png 78 446 83 422
I tried to do something like this:
src = (ImageItemList.from_csv(path, 'bb.csv', folder='images', suffix='')
       .random_split_by_pct(0.2)
       .label_from_df(label_delim=' '))
data = (src.transform(get_transforms(), tfm_y=True, size=(120,160))
        .databunch(bs=16, collate_fn=bb_pad_collate)
        .normalize(imagenet_stats))
But I am getting an error like this:

Exception Traceback (most recent call last)
/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs)
537 x = ds[0]
--> 538 try: x.apply_tfms(tfms, **kwargs)
539 except Exception as e:

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/core.py in apply_tfms(self, tfms, **kwargs)
    156 "Subclass this method if you want to apply data augmentation with tfms to this ItemBase."
--> 157 if tfms: raise Exception(f"Not implemented: you can't apply transforms to this type of item ({self.__class__.__name__})")
    158 return self

Exception: Not implemented: you can't apply transforms to this type of item (MultiCategory)

During handling of the above exception, another exception occurred:

Exception Traceback (most recent call last)
<ipython-input> in <module>
1 print(src)
----> 2 data = (src.transform(get_transforms(), tfm_y=True, size=(120,160))
3 .databunch(bs=16, collate_fn=bb_pad_collate)
4 .normalize(imagenet_stats))

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in transform(self, tfms, **kwargs)
457 if not tfms: tfms=(None,None)
    458 assert is_listy(tfms) and len(tfms) == 2, "Please pass a list of two lists of transforms (train and valid)."
--> 459 self.train.transform(tfms[0], **kwargs)
460 self.valid.transform(tfms[1], **kwargs)
461 if self.test: self.test.transform(tfms[1], **kwargs)

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in transform(self, tfms, tfm_y, **kwargs)
663 _check_kwargs(self.x, tfms, **kwargs)
664 if tfm_y is None: tfm_y = self.tfm_y
--> 665 if tfm_y: _check_kwargs(self.y, tfms, **kwargs)
666 self.tfms,self.tfmargs = tfms,kwargs
667 self.tfm_y,self.tfms_y,self.tfmargs_y = tfm_y,tfms,kwargs

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs)
538 try: x.apply_tfms(tfms, **kwargs)
539 except Exception as e:
--> 540 raise Exception(f"It's not possible to apply those transforms to your dataset:\n {e}")
541
542 class LabelList(Dataset):

Exception: It's not possible to apply those transforms to your dataset:
Not implemented: you can't apply transforms to this type of item (MultiCategory)

Could someone tell me how to create the databunch for object detection?
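For reference, the MultiCategory error above comes from label_from_df, which treats the space-delimited column as multi-label classes rather than bounding boxes; object-detection labels in fastai v1 are lists of the form [bboxes, classes]. A minimal sketch of parsing one row of the space-separated csv shown above into that shape (the class name and exact coordinate order are assumptions here, not taken from the file):

```python
# Parse one csv row ("name x1 y1 x2 y2") into the [bboxes, classes]
# structure used for object-detection labels.
def parse_row(row, cls="tree"):
    parts = row.split()
    name = parts[0]
    coords = [int(c) for c in parts[1:]]
    # one bbox plus a parallel list holding its class
    return name, [[coords], [cls]]

name, labels = parse_row("tree.png 78 446 83 422")
print(name, labels)  # tree.png [[[78, 446, 83, 422]], ['tree']]
```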


(Yash) #2

Hi @sanketg, did you solve this issue?


#3

No, I am still facing the issue.


(Akshai Rajendran) #4

I’ve been trying to reimplement the object detection lesson from last year using Francesco Pochetti’s very thorough blog post as a guide. I’ve gotten the data bunch working and can train the model as well but am getting some weird output. For the data I created a df like this:

    image	                                            bboxes	                classes
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[294, 115, 49, 448]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[432, 91, 42, 330]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[224, 230, 104, 414]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[387, 46, 82, 464]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[335, 103, 66, 331]]	[137]

Where bboxes is a list of lists of coordinates and classes is a list of the class of each bbox. I then created a column with a flag for training set and validation set, and concatenated the training dataframe and validation dataframe. The following code acts on the concatenated frame, df_concat.

# map each image to its [bboxes, classes] label
x = [list(x) for x in zip(df_concat.bboxes, df_concat.classes)]
img2bbox = dict(zip(df_concat.image, x))
# strip the dataset root so lookups use the relative image path
get_y_func = lambda o: img2bbox[o.replace(str(path) + "/", "")]
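A toy version of the lookup above (the file names and root path are made up for illustration), showing what the labeling function returns for one image path:

```python
# Toy stand-ins for df_concat.image / df_concat.bboxes / df_concat.classes
images  = ["train/a.jpg", "train/b.jpg"]
bboxes  = [[[294, 115, 49, 448]], [[432, 91, 42, 330]]]
classes = [[137], [137]]

# Same construction as above: map relative image path -> [bboxes, classes]
pairs = [list(p) for p in zip(bboxes, classes)]
img2bbox = dict(zip(images, pairs))

root = "/data"  # hypothetical dataset root
get_y = lambda o: img2bbox[o.replace(root + "/", "")]

print(get_y("/data/train/a.jpg"))  # [[[294, 115, 49, 448]], [137]]
```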

sz = 224
bs = 64

tfms = get_transforms(do_flip=True, flip_vert=False)
data = (ObjectItemList.from_df(df_concat, path=path)
        .split_from_df(col=3)
        .label_from_func(get_y_func)
        .transform(tfms, tfm_y=True, size=(sz, sz))
        .databunch(bs=bs, collate_fn=bb_pad_collate)
        .normalize(imagenet_stats)
       )

This should create a usable databunch. I hit a very odd issue during training where some classes in the validation dataset were set to None despite being identical to other classes that were parsed properly. I dug into the code but could not figure out where the issue came from, so I used the following hack to get around it.

# hack to fix an inexplicable bug in the data loader where valid_ds
# categories are None despite being valid
for i, valid_y in enumerate(data.valid_ds.y.items):
    if valid_y[1][0] is None:
        data.valid_ds.y.items[i] = get_y_func(data.valid_ds.x.items[i])

Hope this helps with creating your databunch! I’m still working on the model but will post a link to the notebook once I finish.


(Tom) #5

I have been doing something similar for object detection and wondered if anyone else had performance issues for generating the databunch. I have 110,000 images, and it takes over an hour to create the databunch. Is this to be expected? When I was using fastai v0.7, I could create a dataloader for 300,000 images in a few minutes.

Here is my code:

data = (ObjectItemList.from_df(path=IMG_PATH, df=trn_fns_df)
        .random_split_by_pct()
        .label_from_func(get_y_func2)
        .transform(get_transforms(max_rotate=5, max_zoom=1.05), tfm_y=True, size=size, resize_method=ResizeMethod.SQUISH)
        .databunch(bs=64, collate_fn=bb_pad_collate)
       )

data = data.normalize()
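One way to narrow down where the hour goes is to time each stage of the data block pipeline separately. A minimal, fastai-agnostic timing helper (the pipeline steps in the comments are hypothetical):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # print wall-clock seconds spent inside the with-block
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.2f}s")

# Hypothetical usage, wrapping each pipeline step:
# with timed("labeling"):
#     src = src.label_from_func(get_y_func2)
# with timed("transform + databunch"):
#     data = src.transform(...).databunch(...)
```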

(Tom) #6

It seems that the bottleneck is when the labels are added, whether using label_from_func or label_from_df.
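If the labeling function filters the full DataFrame once per image, labeling costs O(n) per item and O(n²) overall, which would match a slowdown that only shows up at the labeling step; precomputing a dict (as in post #4's img2bbox) makes each lookup O(1). A toy illustration of the difference, not fastai-specific:

```python
# Toy label table: image name -> label index
items = [(f"img_{i}.jpg", i) for i in range(1000)]

# Slow: linear scan per lookup (what a per-row DataFrame filter amounts to)
def label_scan(name):
    return next(lbl for n, lbl in items if n == name)

# Fast: one-time dict build, then O(1) lookups
lookup = dict(items)
def label_dict(name):
    return lookup[name]

assert label_scan("img_500.jpg") == label_dict("img_500.jpg") == 500
```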
