IceVision: An Agnostic Object Detection Framework

We just released IceVision version 0.4.0, which now supports RetinaNet. We are sharing a Colab notebook that shows how to train a RetinaNet model using the latest version of fastai.

RetinaNet shares some code similarities with Faster-RCNN: you can switch between the two models simply by replacing `faster_rcnn` with `retinanet`. We also support all compatible backbones.
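To illustrate the swap, here is a minimal sketch. Note this is not the exact IceVision API: the stand-in namespaces below only mimic the pattern described above, where each model family is exposed as a module with the same entry points, so changing architectures means changing a single name.

```python
# Hedged sketch: stand-ins for IceVision's faster_rcnn / retinanet modules.
# In IceVision itself you would import the real modules; the point here is
# only that both expose the same interface, so swapping is a one-name change.
from types import SimpleNamespace

faster_rcnn = SimpleNamespace(model=lambda num_classes: f"FasterRCNN(num_classes={num_classes})")
retinanet = SimpleNamespace(model=lambda num_classes: f"RetinaNet(num_classes={num_classes})")

arch = retinanet  # the only change needed to go from faster_rcnn to retinanet
model = arch.model(num_classes=3)
print(model)  # -> RetinaNet(num_classes=3)
```

Because the modules share the same entry points, the rest of a training script (dataloaders, learner, etc.) stays unchanged when you switch architectures.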

RetinaNet Notebook:
Backbones for Faster-RCNN, Mask-RCNN, RetinaNet:

What is IceVision?

  • IceVision is an object detection framework that connects to different libraries/frameworks such as fastai, PyTorch Lightning, and PyTorch, with more to come.

  • Features a Unified Data API with out-of-the-box support for common annotation formats (COCO, VOC, etc.)

  • Supports EfficientDet (Ross Wightman’s implementation), Faster-RCNN, Mask-RCNN, and RetinaNet (Torchvision implementations)

  • Offers features unique to IceVision, such as auto-detecting and auto-fixing invalid data.

  • The IceData repo hosts community-maintained parsers and custom datasets

  • Provides flexible model implementations with pluggable backbones

  • Helps researchers reproduce, replicate, and go beyond published models

  • Enables practitioners to get moving with object detection technology quickly



If you have any questions, you can contact us here:


Thanks a lot for this framework! I think this is the first object detection framework that includes all the benefits of fastai, so great work!

The good news: I tested the EfficientDet notebook on a custom dataset I have, and it worked perfectly!

The bad news: I wasn’t able to make the FixedDataSplitter work. Do you have a working example that could help me understand how best to use it?


[EDIT] I finally managed to use it. I was trying to split according to the file names with extensions (‘file.png’), whereas it seems the library expects names without the extension (‘file’ instead of ‘file.png’). I still think having one or two examples in the docs would be useful 🙂.


Thanks a lot for the feedback @sebderhy, I’m happy to hear you were able to train EfficientDet on your custom dataset.

We are working hard on improving the documentation and will make sure to include some examples of how to use FixedDataSplitter. For now, to clarify: what you pass to it is whatever you return for `imageid` in your parser.
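To make the extension issue from the discussion concrete, here is a minimal plain-Python sketch of turning filenames into the bare ids a parser would return for `imageid`. The filenames are illustrative, and any exact FixedDataSplitter signature is an assumption; only the "strip the extension" point is taken from the thread:

```python
from pathlib import Path

# Filenames as they appear on disk (illustrative names)
filenames = ["img001.png", "img002.png", "img003.png", "img004.png"]

# Path.stem drops the extension: 'img001.png' -> 'img001'.
# These bare ids are the form a parser typically returns for imageid,
# which is what the splitter matches against.
ids = [Path(f).stem for f in filenames]

# Id lists like these are what you would hand to FixedDataSplitter
# (the exact call signature may vary between IceVision versions).
train_ids, valid_ids = ids[:3], ids[3:]
print(train_ids, valid_ids)  # -> ['img001', 'img002', 'img003'] ['img004']
```

Passing `'img001.png'` instead of `'img001'` would silently fail to match, which is exactly the pitfall described above.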

If you have more questions or would like to share your work with the community (we would love that), our Discord forum would be a good place to do that =)