I am playing around with object detection and wondering how to do it in fast.ai v1. Are there notebooks in fastai_docs that I can look at? I found that vision.models includes YOLOv3.
Darknet architecture, which is the base of Yolo v3
So in the new version of the fast.ai course, will we use YOLOv3 to detect objects rather than the SSD from the previous version? How can I get more information about that (e.g. the notebook used for developing the YOLOv3 code)?
class RetinaNet(nn.Module):
    "Implements RetinaNet from https://arxiv.org/abs/1708.02002 (Focal Loss for Dense Object Detection)"
    def __init__(self, encoder:nn.Module, n_classes, final_bias=0., chs=256, n_anchors=9, flatten=True):
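For context, the default n_anchors=9 corresponds to the usual RetinaNet setup of 3 scales × 3 aspect ratios per feature-map location, and it determines the output width of the two heads. A small stand-alone sketch of that arithmetic (the helper name here is illustrative, not part of fastai):

```python
# Sketch: where RetinaNet's n_anchors=9 and head widths come from.
# 3 scales x 3 aspect ratios per feature-map location, as in the paper.
scales = [2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3)]
ratios = [0.5, 1.0, 2.0]
n_anchors = len(scales) * len(ratios)  # 9

def head_out_channels(n_classes, n_anchors):
    # Classification head predicts n_classes scores per anchor;
    # regression head predicts 4 box offsets per anchor.
    return {"classifier": n_anchors * n_classes, "box_regressor": n_anchors * 4}

print(n_anchors)                         # 9
print(head_out_channels(20, n_anchors))  # 9*20=180 class outputs, 9*4=36 box outputs
```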
@m000ritz, @hkristen Were you able to make object detection work with fast.ai v1? There is no ObjectDetectDataset in the fastai library any more. Are there any examples of this in fast.ai v1, whether with RetinaNet or the previously used SSD technique?
For the fastai COCO subset I was able to create the data object with the data block API like this:
PATH = Path('.../coco_sample')
ANNOT_PATH = 'annotations'
images, lbl_bbox = get_annotations(PATH / ANNOT_PATH / 'train_sample.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o:img2bbox[o.name]
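For anyone new to this pattern: get_annotations returns parallel lists of image filenames and their (bounding boxes, labels) targets, so zipping them into a dict lets the label function look targets up by filename. A minimal stand-alone illustration (the sample data below is made up):

```python
from pathlib import Path

# Illustrative stand-in for get_annotations' output: a list of image
# filenames and, for each image, a (bboxes, labels) pair.
images = ["000001.jpg", "000002.jpg"]
lbl_bbox = [
    ([[10, 20, 50, 60]], ["person"]),
    ([[5, 5, 30, 30], [40, 40, 90, 90]], ["cat", "dog"]),
]

# Map each filename to its target, then look targets up by item name.
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o: img2bbox[o.name]  # o is a Path-like item

print(get_y_func(Path("train_sample/000002.jpg")))
```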
data = (ObjectItemList.from_folder(PATH / 'train_sample')
        # Where are the images?
        .random_split_by_pct()
        # How to split in train/valid? -> randomly, with the default 20% in valid
        .label_from_func(get_y_func)
        # How to find the labels? -> use get_y_func
        .transform(get_transforms(), tfm_y=True, padding_mode='zeros', do_crop=False, size=128)
        # Data augmentation? -> standard transforms with tfm_y=True
        .databunch(bs=16, collate_fn=bb_pad_collate))
        # Finally we convert to a DataBunch and use bb_pad_collate
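The reason a custom collate function is needed is that images in a batch can have different numbers of boxes. Here is a simplified pure-Python sketch of the padding idea behind bb_pad_collate (the real function works on tensors and also stacks the images; this only shows the target padding):

```python
# Sketch of what bb_pad_collate does conceptually: targets are padded to
# the batch maximum so they can be stacked into one tensor.  In fastai v1
# label 0 is reserved for "background", so it works as the pad value.
def pad_targets(batch):
    max_len = max(len(labels) for _, labels in batch)
    padded = []
    for bboxes, labels in batch:
        n_pad = max_len - len(labels)
        padded.append((bboxes + [[0, 0, 0, 0]] * n_pad, labels + [0] * n_pad))
    return padded

batch = [
    ([[0.1, 0.1, 0.4, 0.4]], [3]),                             # one box
    ([[0.2, 0.2, 0.5, 0.5], [0.6, 0.6, 0.9, 0.9]], [1, 2]),   # two boxes
]
print(pad_targets(batch))  # first item is padded to two boxes
```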
Then I get stuck at the error described above. I think I'll need more time, or maybe help from @sgugger, to resolve this.
Since the creation of the notebook, model_sizes has changed its behaviour. You can now use:

sfs_szs = model_sizes(encoder, size=imsize)
hooks = hook_outputs(encoder)

instead. That should work for you.
Thanks @hkristen. So I think playing with object detection in fast.ai is quite difficult right now, because it lacks examples and the library is still changing. I'll switch to reading last year's course with SSD, and hope Jeremy will say something about it in the last session of part 1 v3.
This creates a Dynamic SSD based on the number of grid cells, zoom levels and aspect ratios for the anchor boxes.
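To make that concrete, one anchor box is generated for every (grid cell, zoom, aspect ratio) combination, so the anchor count is the product of those three. A pure-Python sketch of such dynamic anchor generation (names and coordinate conventions here are illustrative, not the notebook's actual code):

```python
import itertools

# Sketch of dynamic anchor generation: one anchor per (grid cell, zoom,
# aspect ratio) combination; coordinates are relative to the image (0..1).
def make_anchors(grid_size, zooms, ratios):
    anchors = []
    step = 1.0 / grid_size
    for row, col in itertools.product(range(grid_size), repeat=2):
        cx, cy = (col + 0.5) * step, (row + 0.5) * step  # cell centre
        for zoom, ratio in itertools.product(zooms, ratios):
            # Scale the cell by zoom, then stretch by the aspect ratio
            # while keeping the anchor's area constant.
            w = step * zoom * ratio ** 0.5
            h = step * zoom / ratio ** 0.5
            anchors.append((cx, cy, w, h))
    return anchors

anchors = make_anchors(grid_size=4, zooms=[0.7, 1.0, 1.3], ratios=[0.5, 1.0, 2.0])
print(len(anchors))  # 4*4 grid cells * 3 zooms * 3 ratios = 144
```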
We haven't fully trained the network in the dev notebook (e.g. unfreezing, fine-tuning, and using Focal Loss, which is supported), but we are already getting good results. Hope you find it useful!
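For readers unfamiliar with Focal Loss: it down-weights easy, well-classified examples so that training focuses on hard ones, which matters in detection because background anchors vastly outnumber object anchors. A scalar pure-Python sketch of the paper's formula FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t) (fastai's version operates on tensors):

```python
import math

# Focal loss for a single binary prediction p (probability of the
# positive class), following FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
def focal_loss(p, target, alpha=0.25, gamma=2.0):
    p_t = p if target == 1 else 1 - p            # prob. of the true class
    alpha_t = alpha if target == 1 else 1 - alpha
    return -alpha_t * (1 - p_t) ** gamma * math.log(p_t)

# An easy, well-classified positive contributes far less than a hard one:
print(focal_loss(0.9, 1))  # easy example, loss close to zero
print(focal_loss(0.1, 1))  # hard example, much larger loss
```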
So the code in fastai v1 for object detection will not be ready until part 2? I already see the YOLOv3 architecture in the docs… is it not yet 100% ready?