Can anyone help me with an example of how to create a DataBunch for object detection with fastai v1? I am aware that this will be explained in Part 2, but I am asking out of enthusiasm. Please let me know if anyone has any idea.
The DataBunch is really just a wrapper around train/valid/test dataloaders. The bits that are different for object detection are the datasets, which have to return not just image/category but also bounding boxes and possibly masks. A dataset is just an abstract class, so you can define __getitem__ to return anything you want.
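To make that concrete, here is a minimal sketch of a detection-style dataset. Everything in it (the class name, the annotation tuple format, the sample file name) is invented for illustration; a real fastai/PyTorch dataset would load and return tensors, but the point is only that __getitem__ can return whatever the task needs:

```python
# Hypothetical minimal detection dataset: __getitem__ returns the image
# together with its boxes and labels, not just a single category.
class DetectionDataset:
    def __init__(self, items):
        # items: list of (image, bboxes, labels) tuples; each bbox is a
        # list of four coordinates, labels are class names
        self.items = items

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        image, bboxes, labels = self.items[idx]
        # the target is a (boxes, labels) pair rather than one class
        return image, (bboxes, labels)


sample = ("img_0.jpg", [[10, 20, 50, 60]], ["dog"])
ds = DetectionDataset([sample])
image, (bboxes, labels) = ds[0]
```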
I have an implementation of Mask R-CNN which uses fastai. It could probably be better, e.g. it does not use the data block API, and the Mask R-CNN architecture is crazy complex. Nevertheless, it may help.
I want a more detailed example using the data block API.
You can find all kinds of examples in the docs at https://docs.fast.ai/data_block.html, including object detection, like below:
```python
data = (ObjectItemList.from_folder(coco)
        # Where are the images? -> in coco and its subfolders
        .split_by_rand_pct()
        # How to split in train/valid? -> randomly with the default 20% in valid
        .label_from_func(get_y_func)
        # How to find the labels? -> use get_y_func on the file name of the data
        .transform(get_transforms(), tfm_y=True)
        # Data augmentation? -> Standard transforms; also transform the label images
        .databunch(bs=16, collate_fn=bb_pad_collate))
        # Finally we convert to a DataBunch, use a batch size of 16,
        # and we use bb_pad_collate to collate the data into a mini-batch
```
This is fine, but I want something showing how to make a DataBunch from images and their bounding-box values.
See my post here: Object detection databunch