Object detection databunch

I am trying to use the new fastai library for object detection and I am having a problem while creating the databunch.
I have a CSV file with image names and bounding box coordinates, like this (image_name, bbox_coordinates):
tree.png 78 446 83 422
I tried to do something like this:
src = (ImageItemList.from_csv(path, 'bb.csv', folder='images', suffix='')
.random_split_by_pct(0.2)
.label_from_df(label_delim=' '))
data =(src.transform(get_transforms(), tfm_y=True, size=(120,160))
.databunch(bs=16, collate_fn=bb_pad_collate)
.normalize(imagenet_stats))
But I am getting an error like this:

Exception Traceback (most recent call last)
/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs)
537 x = ds[0]
--> 538 try: x.apply_tfms(tfms, **kwargs)
539 except Exception as e:

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/core.py in apply_tfms(self, tfms, **kwargs)
156 "Subclass this method if you want to apply data augmentation with tfms to this ItemBase."
--> 157 if tfms: raise Exception(f"Not implemented: you can't apply transforms to this type of item ({self.__class__.__name__})")
158 return self

Exception: Not implemented: you can't apply transforms to this type of item (MultiCategory)

During handling of the above exception, another exception occurred:

Exception Traceback (most recent call last)
in
1 print(src)
----> 2 data =(src.transform(get_transforms(), tfm_y=True, size=(120,160))
3 .databunch(bs=16, collate_fn=bb_pad_collate)
4 .normalize(imagenet_stats))

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in transform(self, tfms, **kwargs)
457 if not tfms: tfms=(None,None)
458 assert is_listy(tfms) and len(tfms) == 2, "Please pass a list of two lists of transforms (train and valid)."
--> 459 self.train.transform(tfms[0], **kwargs)
460 self.valid.transform(tfms[1], **kwargs)
461 if self.test: self.test.transform(tfms[1], **kwargs)

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in transform(self, tfms, tfm_y, **kwargs)
663 _check_kwargs(self.x, tfms, **kwargs)
664 if tfm_y is None: tfm_y = self.tfm_y
--> 665 if tfm_y: _check_kwargs(self.y, tfms, **kwargs)
666 self.tfms,self.tfmargs = tfms,kwargs
667 self.tfm_y,self.tfms_y,self.tfmargs_y = tfm_y,tfms,kwargs

/new_data/gpu/sanketg/anaconda3/envs/pytorch1/lib/python3.6/site-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs)
538 try: x.apply_tfms(tfms, **kwargs)
539 except Exception as e:
--> 540 raise Exception(f"It's not possible to apply those transforms to your dataset:\n {e}")
541
542 class LabelList(Dataset):

Exception: It's not possible to apply those transforms to your dataset:
Not implemented: you can't apply transforms to this type of item (MultiCategory)

Could someone tell me how to create the databunch for object detection?

Hi @sanketg, did you solve this issue?

No, still facing the issue.

I’ve been trying to reimplement the object detection lesson from last year using Francesco Pochetti’s very thorough blog post as a guide. I’ve gotten the data bunch working and can train the model as well but am getting some weird output. For the data I created a df like this:

    image	                                            bboxes	                classes
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[294, 115, 49, 448]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[432, 91, 42, 330]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[224, 230, 104, 414]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[387, 46, 82, 464]]	[137]
    ILSVRC/Data/CLS-LOC/train/n02017213/n02017213_...	[[335, 103, 66, 331]]	[137]

Where bboxes is a list of lists of coordinates and classes is a list of the class of each bbox. I then created a column with a flag for training set and validation set, and concatenated the training dataframe and validation dataframe. The following code acts on the concatenated frame, df_concat.

x = [list(x) for x in zip(df_concat.bboxes, df_concat.classes)]
img2bbox = dict(zip(df_concat.image, x))
get_y_func = lambda o:img2bbox[o.replace(str(path) + "/", "")]
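
For reference, the train/valid flag column mentioned above was built roughly like this; df_train / df_valid and the is_valid column name are just illustrative, the point is that the flag ends up as the fourth column, which split_from_df(col=3) below reads.

    import pandas as pd

    # hypothetical names: the two frames before concatenation, each with
    # image / bboxes / classes columns as in the table above
    df_train['is_valid'] = False
    df_valid['is_valid'] = True
    df_concat = pd.concat([df_train, df_valid], ignore_index=True)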

sz = 224
bs = 64

tfms = get_transforms(do_flip=True, flip_vert=False)
data = (ObjectItemList.from_df(df_concat, path=path)
        .split_from_df(col=3)
        .label_from_func(get_y_func)
        .transform(tfms, tfm_y=True, size=(sz, sz))
        .databunch(bs=bs, collate_fn=bb_pad_collate)
        .normalize(imagenet_stats)
       )

This should create a useable databunch. I had a very odd issue that I discovered during training where some classes in the validation dataset were set to None despite being identical to other classes that were parsed properly. I tried digging into the code but could not figure out where this issue came from. I used the following hack to get around it.

#hack to fix an inexplicable bug in the data loader where valid_ds categories are None despite being valid
for i,valid_y in enumerate(data.valid_ds.y.items):
    if valid_y[1][0] is None:
        data.valid_ds.y.items[i] = get_y_func(data.valid_ds.x.items[i])

Hope this helps with creating your databunch! I’m still working on the model but will post a link to the notebook once I finish.

I have been doing something similar for object detection and wondered if anyone else had performance issues for generating the databunch. I have 110,000 images, and it takes over an hour to create the databunch. Is this to be expected? When I was using fastai v0.7, I could create a dataloader for 300,000 images in a few minutes.

Here is my code:

data = (ObjectItemList.from_df(path=IMG_PATH, df=trn_fns_df)
    .random_split_by_pct()                          
    .label_from_func(get_y_func2)
    .transform(get_transforms(max_rotate=5, max_zoom=1.05), tfm_y=True, size=size, resize_method=ResizeMethod.SQUISH)
    .databunch(bs=64, collate_fn=bb_pad_collate)
    )

data = data.normalize()

It seems that the bottleneck is when the labels are added, whether using from_func or from_df…
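
To narrow it down, this is roughly how I'd time each stage separately (same objects as in my snippet above; just wall-clock timing around each call, nothing fastai-specific):

    import time

    t0 = time.time()
    src = ObjectItemList.from_df(path=IMG_PATH, df=trn_fns_df).random_split_by_pct()
    print(f'split:                 {time.time() - t0:.1f}s')

    t0 = time.time()
    src = src.label_from_func(get_y_func2)   # where I suspect most of the time goes
    print(f'labelling:             {time.time() - t0:.1f}s')

    t0 = time.time()
    data = (src.transform(get_transforms(max_rotate=5, max_zoom=1.05), tfm_y=True,
                          size=size, resize_method=ResizeMethod.SQUISH)
               .databunch(bs=64, collate_fn=bb_pad_collate)
               .normalize())
    print(f'transform + databunch: {time.time() - t0:.1f}s')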

Hi aranjendran,
I am facing a strange error.
I use this code to generate the list of bboxes and classes for each item.

Each bbox is a list of the form [x1, y1, x1max, y1max], [x2, y2, x2max, y2max],
so x[0:2] is [[[nan, nan, nan, nan], 0], [[nan, nan, nan, nan], 0]]

    x = [list(x) for x in zip(train_df.bbox, train_df.Target)]
    img2bbox = dict(zip(train_df.patientId, x))
    get_y_func = lambda o: img2bbox[o[o.rfind('/')+1:]]

Below is the error trace when I try to create the labels. I am badly stuck here, can you please help?

data = data.label_from_func(get_y_func)
# data.train.x displays x items

/opt/conda/lib/python3.6/site-packages/fastai/data_block.py in _inner(*args, **kwargs)
466 self.valid = fv(*args, from_item_lists=True, **kwargs)
467 self.__class__ = LabelLists
--> 468 self.process()
469 return self
470 return _inner

/opt/conda/lib/python3.6/site-packages/fastai/data_block.py in process(self)
520 "Process the inner datasets."
521 xp,yp = self.get_processors()
--> 522 for ds,n in zip(self.lists, ['train','valid','test']): ds.process(xp, yp, name=n)
523 #progress_bar clear the outputs so in some case warnings issued during processing disappear.
524 for ds in self.lists:

/opt/conda/lib/python3.6/site-packages/fastai/data_block.py in process(self, xp, yp, name)
683 def process(self, xp:PreProcessor=None, yp:PreProcessor=None, name:str=None):
684 "Launch the processing on self.x and self.y with xp and yp."
--> 685 self.y.process(yp)
686 if getattr(self.y, 'filter_missing_y', False):
687 filt = array([o is None for o in self.y.items])

/opt/conda/lib/python3.6/site-packages/fastai/data_block.py in process(self, processor)
73 if processor is not None: self.processor = processor
74 self.processor = listify(self.processor)
--> 75 for p in self.processor: p.process(self)
76 return self
77

/opt/conda/lib/python3.6/site-packages/fastai/vision/data.py in process(self, ds)
329 def process(self, ds:ItemList):
330 ds.pad_idx = self.pad_idx
--> 331 super().process(ds)
332
333 def process_one(self,item): return [item[0], [self.c2i.get(o,None) for o in item[1]]]

/opt/conda/lib/python3.6/site-packages/fastai/data_block.py in process(self, ds)
334
335 def process(self, ds):
--> 336 if self.classes is None: self.create_classes(self.generate_classes(ds.items))
337 ds.classes = self.classes
338 ds.c2i = self.c2i

/opt/conda/lib/python3.6/site-packages/fastai/vision/data.py in generate_classes(self, items)
335 def generate_classes(self, items):
336 "Generate classes from unique items and add background."
--> 337 classes = super().generate_classes([o[1] for o in items])
338 classes = ['background'] + list(classes)
339 return classes

/opt/conda/lib/python3.6/site-packages/fastai/data_block.py in generate_classes(self, items)
389 "Generate classes from items by taking the sorted unique values."
390 classes = set()
--> 391 for c in items: classes = classes.union(set(c))
392 classes = list(classes)
393 classes.sort()

TypeError: ‘int’ object is not iterable

What I think is the possible source of the error is at line 329, process(ds…): ds is empty, so it only initializes the first member, pad_idx=0, which is an integer, and hence in generate_classes this integer is not iterable.
I am unable to determine why it is empty.

I think the issue may be with the exact structure of your list x. In my example I zip together a list of bbox coordinates and a list of classes, i.e. [[294, 115, 49, 448]] and [137] which would yield [[[[294, 115, 49, 448]], [137]], …]. It’s a little confusing since there are so many layers of lists but it seems to me that you may be missing a layer since your class is not wrapped in its own list.
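
For example, with a toy frame (made-up image names and boxes) you can check that the nesting comes out right:

    import pandas as pd

    # one row per image: bboxes is a list of coordinate lists (one per box),
    # classes is a list with one class id per box
    df = pd.DataFrame({
        'image':   ['a.jpg', 'b.jpg'],
        'bboxes':  [[[294, 115, 49, 448]], [[432, 91, 42, 330], [10, 10, 50, 50]]],
        'classes': [[137], [137, 42]],
    })

    x = [list(v) for v in zip(df.bboxes, df.classes)]
    img2bbox = dict(zip(df.image, x))
    print(img2bbox['a.jpg'])   # [[[294, 115, 49, 448]], [137]]
    print(img2bbox['b.jpg'])   # [[[432, 91, 42, 330], [10, 10, 50, 50]], [137, 42]]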

Thanks, yes, you're right… I noticed I was not passing a list of lists, which is what the code is meant to handle.
I need some more help in understanding the grid and anchors:

  1. How are they trying to set up the grid as shown below? Just a high-level overview would help in understanding the code (I put a small usage sketch after this list to make the shapes concrete for myself).

    def create_grid(size):
        "Create a grid of a given size."
        H, W = size if is_tuple(size) else (size,size)
        grid = FloatTensor(H, W, 2)
        linear_points = torch.linspace(-1+1/W, 1-1/W, W) if W > 1 else tensor([0.])
        grid[:, :, 1] = torch.ger(torch.ones(H), linear_points).expand_as(grid[:, :, 0])
        linear_points = torch.linspace(-1+1/H, 1-1/H, H) if H > 1 else tensor([0.])
        grid[:, :, 0] = torch.ger(linear_points, torch.ones(W)).expand_as(grid[:, :, 1])
        return grid.view(-1,2)

  2. What purpose do the scales serve when we define the anchors?

    def create_anchors(sizes, ratios, scales, flatten=True):
        "Create anchor of sizes, ratios and scales."
        aspects = [[[s*math.sqrt(r), s*math.sqrt(1/r)] for s in scales] for r in ratios]
        aspects = torch.tensor(aspects).view(-1,2)
        anchors = []
        for h,w in sizes:
            # 4 here to have the anchors overlap.
            sized_aspects = 4 * (aspects * torch.tensor([2/h,2/w])).unsqueeze(0)
            base_grid = create_grid((h,w)).unsqueeze(1)
            n,a = base_grid.size(0),aspects.size(0)
            ancs = torch.cat([base_grid.expand(n,a,2), sized_aspects.expand(n,a,2)], 2)
            anchors.append(ancs.view(h,w,a,4))
        return torch.cat([anc.view(-1,4) for anc in anchors],0) if flatten else anchors

  3. Why do we multiply the height by the square root of the ratio and divide the width by the square root of the ratio?
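
To make the shapes concrete for myself I wrote a small usage sketch (assuming the two definitions above plus fastai's FloatTensor / tensor / is_tuple helpers are in scope; the printed values are just my reading of the code, so treat it as an illustration):

    grid = create_grid(2)
    print(grid)
    # tensor([[-0.5000, -0.5000],
    #         [-0.5000,  0.5000],
    #         [ 0.5000, -0.5000],
    #         [ 0.5000,  0.5000]])
    # -> one (y, x) centre per cell of a 2x2 grid, in [-1, 1] coordinates

    anchors = create_anchors(sizes=[(4, 4)], ratios=[1/2., 1., 2.], scales=[1., 0.8, 0.6])
    print(anchors.shape)
    # torch.Size([144, 4]) -> 4*4 grid cells * 9 (ratio, scale) combinations,
    # each anchor as [y_centre, x_centre, height, width]

As far as I can tell, using s*sqrt(r) for one side and s*sqrt(1/r) for the other keeps the anchor area at s^2 for every ratio while fixing the side ratio at r, and the scales then just give several anchor areas per grid cell, which is my guess at 2. and 3., but I would appreciate confirmation.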

I’m having the same issue. Has anyone solved it yet? The replies don’t seem to work for me :confused: