Ok, so I'm trying to get my head around the data block API. The docs are good, I'm not complaining, but I'm struggling to see how I can use it for a COCO-like dataset. I come from a stats background, so my programming skills are not super strong; using data blocks is a bit of a learning curve for me, but I can see the power in this approach, so I would really like to understand it.
I'm using data annotated with VIA, exported to JSON with polygons and a binary category (object, background). My goal here is to use this for lesson 3 and play around with U-Net. I'm wondering how I reconcile what I want to do (masks) with JSON-format data. Do I need to extend the SegmentationLabelList class?
edit: I could potentially try to re-annotate them in a different way, although the use case is segmentation, so I'm guessing that limits the alternatives?
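For context, this is roughly the shape of the VIA 2.x export and how far I can get with just the stdlib (the keys, filenames, and the `label` attribute name below are from my own project, so treat them as assumptions); pulling the polygons out is easy, it's turning them into fastai masks that I'm unsure about:

```python
import json

# A minimal, made-up slice of a VIA 2.x JSON export.
via = json.loads("""
{
  "img1.jpg123": {
    "filename": "img1.jpg",
    "regions": [
      {"shape_attributes": {"name": "polygon",
                            "all_points_x": [10, 50, 30],
                            "all_points_y": [10, 10, 40]},
       "region_attributes": {"label": "object"}}
    ]
  }
}
""")

def polygons_for(entry):
    """Return (label, [(x, y), ...]) pairs for one VIA image entry."""
    out = []
    for r in entry["regions"]:
        sa = r["shape_attributes"]
        if sa["name"] != "polygon":
            continue
        pts = list(zip(sa["all_points_x"], sa["all_points_y"]))
        label = r["region_attributes"].get("label", "object")
        out.append((label, pts))
    return out

for key, entry in via.items():
    polys = polygons_for(entry)
    # From here each polygon could be rasterised into a PNG mask,
    # e.g. with PIL:
    #   from PIL import Image, ImageDraw
    #   mask = Image.new("L", (width, height), 0)
    #   ImageDraw.Draw(mask).polygon(pts, fill=class_index)
    #   mask.save(entry["filename"].replace(".jpg", "_mask.png"))
```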
Thanks @muellerzr. Just so I understand: are you suggesting I just plug my dataset into this notebook? I can't see anything in it that would help me convert the JSON data into a camvid-style dataset (PNG masks plus txt files).
My goal is object segmentation and I really feel like U-Net is the way to go, so I'm not sure using the data in the format I have with a different network is the best approach?
Ok, so I think I've found a bit of a different fix for anyone else who comes up against this.
I looked at a few different annotation tools. For me, on OS X Mojave, the best route ahead is RectLabel. I tried a whole bunch and ran into issues: some aren't supported on Mac, others are cloud-based (not suitable for our dataset's requirements), and others were buggy. RectLabel has been the best so far (see link below).
Now I have the originals and the PNG masks. I don't have a categories txt file, but from what I can see it's simply a case of creating/editing my own based on the labels I'm segmenting and saving it as codes.txt, e.g. "64 128 64 Animal".
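In case it helps anyone: parsing a codes file in that style is just a split per line. Note the "64 128 64 Animal" format is camvid's label_colors.txt style with an RGB triple per class, whereas the lesson 3 notebook's codes.txt is, as far as I can tell, just one class name per line; a sketch handling the RGB-triple style (the two example lines are made up):

```python
# Normally: lines = open("codes.txt").read().splitlines()
lines = ["64 128 64 Animal", "0 0 0 Background"]

colors, codes = [], []
for line in lines:
    r, g, b, name = line.split()          # "R G B ClassName" per line
    colors.append((int(r), int(g), int(b)))
    codes.append(name)

# `codes` is what the lesson 3 notebook passes as classes=codes;
# `colors` maps each class to the RGB value used in the mask PNGs.
```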
I also need to make a txt file listing the validation image paths. At that point I should be able to use the lesson 3 notebook on my own dataset for image segmentation. I'll update this thread, because from what I can see this has cropped up a couple of times.
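A quick sketch of generating that validation file (the filenames and the 20% split are made up for illustration); in fastai v1 the camvid notebook then consumes it via split_by_fname_file:

```python
import random

# Hypothetical image filenames; in practice list your image folder.
fnames = [f"img_{i:03d}.png" for i in range(100)]

random.seed(42)
valid = set(random.sample(fnames, 20))   # hold out 20% for validation
train = [f for f in fnames if f not in valid]

# One filename per line, as the camvid-style valid.txt expects.
with open("valid.txt", "w") as fh:
    fh.write("\n".join(sorted(valid)))

# In fastai v1 the data block pipeline then looks roughly like:
#   src = (SegmentationItemList.from_folder(path_img)
#          .split_by_fname_file('../valid.txt')
#          .label_from_func(get_y_fn, classes=codes))
```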
We're trying to do something similar to your model. We are using the TACO dataset, which is an open image dataset of waste in the wild, in COCO format. We also want to use what is shown in lesson 3 with U-Net, which could be interesting for such a dataset. Like you, we face the problem of adapting the COCO format with JSON annotations. I reviewed your conversation but I don't know whether you succeeded with your dataset. If you did, how did you do it? We're a little blocked at this point and some pointers would help us a lot!
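To frame where we are stuck: walking the COCO JSON itself is straightforward with the stdlib, and the segmentation polygons come out as flat coordinate lists; it's the rasterisation into per-image PNG masks and the fastai plumbing we're unsure about. A sketch (the ids, filenames, and category below are invented, not actual TACO entries):

```python
import json
from collections import defaultdict

# A minimal, made-up slice of a COCO-format annotations file.
coco = json.loads("""
{
  "images": [{"id": 1, "file_name": "batch_1/000001.jpg",
              "width": 640, "height": 480}],
  "annotations": [{"id": 7, "image_id": 1, "category_id": 3,
                   "segmentation": [[10, 10, 50, 10, 30, 40]]}],
  "categories": [{"id": 3, "name": "Bottle"}]
}
""")

cat_name = {c["id"]: c["name"] for c in coco["categories"]}
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

for img in coco["images"]:
    for ann in anns_by_image[img["id"]]:
        # Each segmentation is a flat [x0, y0, x1, y1, ...] list; pair it up.
        for seg in ann["segmentation"]:
            pts = list(zip(seg[0::2], seg[1::2]))
            # Rasterise pts into a mask PNG (e.g. PIL.ImageDraw.polygon),
            # or let pycocotools do it: COCO(ann_file).annToMask(ann).
```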