Any object detection models implemented with fastai?
That is a good question. An implementation of YOLOv3 or RetinaNet would be handy.
At a recent meetup in Munich, a PyTorch developer from Facebook answered my Q&A question about whether Detectron from Facebook would be ported to PyTorch. His answer was yes, and it should be released in the coming weeks.
Unfortunately, the current Detectron documentation on transfer learning is practically nonexistent.
Hi, there are many detection algorithms implemented in PyTorch:
to name a few.
However, I was hoping to apply fastai's best-practice features to these models.
Is there a tutorial on how to implement object detection or segmentation with the new fast.ai library?
The only reference for that is in the course, and is not complete.
Detection is very important in real-world applications, and it is strange that it was almost completely ignored in the documentation and tutorials.
The fast.ai modules seem to be heavily customized for specific applications and not easily adapted to new tasks. Maybe some official documentation should be provided showing how to experiment with the model architecture, multi-task learning, detection, custom loss functions, etc., beyond the regular image/text classification tasks.
This is very easy to do in Keras and PyTorch, where the models and data are not entangled inside a learner. What am I missing here?
Make the library friendly to more experienced users, not only to beginners…
There is official documentation, and there is also a function to help create a model suitable for segmentation, called
unet_learner. Object detection is on our list of features to be implemented and will be done in the next few months; there is a prototype of RetinaNet here. There is a full example of training a segmentation model in this tutorial, and of using it for inference in this one.
The modules are so “heavily customized” that there are starter kernels for current Kaggle competitions unrelated to classification that take no more than ten lines of code. The data block API in particular allows you to gather your data in a very flexible way.
As always, the model or loss function you put with your data in a
Learner can be a regular PyTorch model that you wrote yourself or found online, so I'm not sure how that's limiting you compared to Keras and/or PyTorch.
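To make that concrete, here is a minimal sketch (assuming fastai v1) of pairing a hand-written PyTorch model and loss function with a Learner. The toy tensors, model, and loss are invented for illustration:

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset
from fastai.basic_data import DataBunch
from fastai.basic_train import Learner

# Toy regression data -- purely illustrative.
x, y = torch.randn(256, 10), torch.randn(256, 1)
train_ds = TensorDataset(x[:200], y[:200])
valid_ds = TensorDataset(x[200:], y[200:])
data = DataBunch.create(train_ds, valid_ds, bs=32)

# Any regular PyTorch module works as the model...
model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 1))

# ...and any callable works as the loss function.
learn = Learner(data, model, loss_func=nn.MSELoss())
learn.fit(1)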
Thank you very much, I think that the library and the course are amazing.
I am just a bit frustrated by the fact that when I try to design a whole pipeline,
which needs to include some modifications to the models and the data flow, I get confused about how to tailor it within fast.ai. Where can I find examples of using a PyTorch-built model to train a detector? (for models that are not in the standard fast.ai/PyTorch zoo)
I’m working on a fastai v1 YOLOv3 implementation that I’ll publish sometime in the next few months.
To point you in the right direction: you're going to have to write your own data_block that can correctly parse the outputs of the model. If you're trying to adapt an existing PyTorch implementation, be prepared to really dig into the model so you can cleanly separate the loss function from the body of the model. Hope that helps.
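As an illustration of that separation, here is a toy sketch in plain PyTorch: the module emits only raw predictions, and everything loss-related lives in a standalone function. The shapes, anchor count, and the placeholder loss are all invented for the example, not taken from any real YOLO implementation:

```python
import torch
from torch import nn

class TinyDetector(nn.Module):
    """Toy single-scale detection head: raw predictions only, no loss logic."""
    def __init__(self, n_classes=3, n_anchors=2):
        super().__init__()
        self.n_classes, self.n_anchors = n_classes, n_anchors
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)
        # Per anchor: 4 box coordinates + one score per class.
        self.head = nn.Conv2d(16, n_anchors * (4 + n_classes), 1)

    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

def detection_loss(pred, target):
    # Placeholder: a real YOLO/RetinaNet loss would decode `pred` into
    # boxes and scores and match them against `target` anchors here.
    # The structural point is that none of this lives in the nn.Module.
    return (pred - target).pow(2).mean()
```

With the loss factored out like this, the model can be handed to fastai unchanged, e.g. Learner(data, model, loss_func=detection_loss).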
Any update Sir?