My question arises from a problem I have.
We are looking to annotate images for segmentation, and we have only 3 labels: class1, class2, and background.
(Just to be clear, since there are many definitions of segmentation: each image pixel has exactly one label, i.e. semantic segmentation.)
What I would like to do is Active Learning, i.e. have the model in the loop. This has (at least for me) two great advantages:
- It helps me find the hardest examples (i.e. which images I should annotate next to improve my model's performance).
- It generates initial segmentation masks that annotators only need to "correct", instead of starting from a blank canvas, which makes the annotation process much simpler.
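One common way to implement the "find the hardest examples" part is to rank unlabeled images by the model's predictive uncertainty, e.g. the mean per-pixel entropy of the softmax output. This is only a sketch of that idea (the function names and the entropy criterion are my assumptions, not something from your setup):

```python
import numpy as np

def mean_pixel_entropy(probs: np.ndarray) -> float:
    """Mean per-pixel entropy of softmax probabilities.

    probs: array of shape (num_classes, H, W), where each pixel's
    class probabilities sum to 1. Higher entropy means the model is
    less sure, so the image is a better candidate to annotate next.
    """
    eps = 1e-12  # avoid log(0) for fully confident pixels
    entropy = -np.sum(probs * np.log(probs + eps), axis=0)  # (H, W)
    return float(entropy.mean())

def rank_by_uncertainty(prob_maps: dict) -> list:
    """prob_maps: dict mapping image name -> (C, H, W) probability array.

    Returns image names sorted most-uncertain first.
    """
    return sorted(prob_maps,
                  key=lambda name: mean_pixel_entropy(prob_maps[name]),
                  reverse=True)
```

With fastai you could feed this the per-class probabilities from `learn.get_preds()` on your unlabeled pool, then send the top-k images to the annotation tool.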
Right now, I have 600 annotated images, and the fastai unet gives very good results. These initial images were annotated by another team and are fairly low resolution. They are provided as grayscale PNG files with pixel values 0, 122, and 255.
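For training, those PNG values usually need to be remapped to contiguous class indices (0, 1, 2), since fastai's segmentation loaders expect the mask pixels to be class codes. A minimal sketch, assuming 0 = background, 122 = class1, 255 = class2 (adjust the mapping if your team used a different convention):

```python
import numpy as np
from PIL import Image

# Assumed mapping from grayscale pixel values to class indices.
VALUE_TO_CLASS = {0: 0, 122: 1, 255: 2}

def png_mask_to_classes(path: str) -> np.ndarray:
    """Load a grayscale PNG mask and remap its pixel values
    (0, 122, 255) to contiguous class indices (0, 1, 2)."""
    raw = np.array(Image.open(path).convert("L"))
    out = np.zeros_like(raw, dtype=np.uint8)
    for value, cls in VALUE_TO_CLASS.items():
        out[raw == value] = cls
    return out
```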
- How would you transform the output of the fastai unet to make it compatible with annotation tools that expect COCO-style JSON polygon regions?
- What tool would you use for this? The simplest option appears to be Label Studio, but it is very basic and excruciatingly slow! CVAT is another tool we are testing.
- Is this conversion even necessary, or do you know a tool that can work directly with the grayscale PNG masks?
I want to be sure that the annotation file format and the tools we choose are up to the task.
Any feedback on how you would approach this would be appreciated.