YOLOv5 implementation in fastai2

I have been working on a project to detect fire in camera imagery (https://github.com/robmarkcole/fire-detection-from-images). I implemented an object detection model using a fine-tuned YOLOv5 architecture and did some basic optimisation and experimentation with image augmentation. With a few hours of effort I achieved an mAP@.5 of 0.657, with precision of 0.6 and recall of 0.7, trained on 1,155 images (337 base images plus augmentations). However, the workflow I used is quite time-consuming (a training notebook, Roboflow for image augmentation, manually recording results), so I want to use fastai2 to speed up iterating over model parameters and image augmentations, and possibly use wandb to keep track of experiment results. Since YOLOv5 is implemented in PyTorch this should be possible, but I am hoping someone knows of a decent article or example on how I could go about doing this. There are a couple of relevant threads on this forum, but they are a couple of years out of date now.



As far as I know, object detection isn’t officially built into fastai2 yet.

I think it’s on the roadmap (there are some hints that it’s coming, e.g. the get_annotations function exists), but I haven’t seen anything newer in the official docs or repo.

Someone is working on it, though: this notebook appears to have a partially complete RetinaNet implementation in fastai2 (referenced via this thread): https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0/blob/master/Computer%20Vision/06_Object_Detection.ipynb


Object detection in fastai2 is definitely not where it should be, and Jeremy is working on a roadmap. That being said, @lgvaz and @farid have a wonderful library, IceVision, that attempts to bridge the gap (successfully). I’d recommend starting there.


Thanks for the advice @yeldarb and @muellerzr; I will pursue IceVision.