Generate Bounding Box - which format

I have a new dataset of images. What format should my bounding box file be in so that it can “easily” be read in using Fast.ai’s libraries? Has anyone used a tool such as LabelImg for this?


I have. A working example of annotating images with bounding boxes using sloth and training with fastai can be found here.
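On the format question: if you are on fastai v1, the object-detection data block expects each image's label as a pair of lists, bounding boxes given as [top, left, bottom, right] pixel coordinates plus one class name per box, so any annotation tool is fine as long as you can convert its output into that shape. A minimal sketch, with made-up file names, coordinates, and classes:

```python
from fastai.vision import (ObjectItemList, get_transforms,
                           bb_pad_collate, imagenet_stats)

# Hypothetical annotations: for each image, a list of boxes given as
# [top, left, bottom, right] pixel coordinates plus one class name per box.
annotations = {
    'img_001.jpg': ([[10, 20, 150, 200]], ['whale']),
    'img_002.jpg': ([[5, 5, 90, 120], [30, 40, 100, 160]], ['whale', 'whale']),
}

def get_y_func(path):
    # fastai expects [list_of_boxes, list_of_classes] for each image
    boxes, classes = annotations[path.name]
    return [boxes, classes]

data = (ObjectItemList.from_folder('images')
        .split_by_rand_pct(0.2)
        .label_from_func(get_y_func)
        .transform(get_transforms(), tfm_y=True, size=224)
        .databunch(bs=16, collate_fn=bb_pad_collate)
        .normalize(imagenet_stats))
```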

The earlier notebook contains information on using sloth for annotation.
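If it helps, here is a rough sketch of turning a sloth JSON export into that shape. It assumes each annotation is a rectangle with x/y/width/height keys and a class field, which may differ depending on your label configuration, so treat it as a starting point rather than the notebook's actual code:

```python
import json

def sloth_to_fastai(json_path):
    # Map each image file name to ([boxes], [classes]) in fastai's
    # [top, left, bottom, right] convention.
    annotations = {}
    for item in json.load(open(json_path)):
        boxes, classes = [], []
        for ann in item.get('annotations', []):
            x, y, w, h = ann['x'], ann['y'], ann['width'], ann['height']
            boxes.append([y, x, y + h, x + w])
            classes.append(ann.get('class', 'object'))
        annotations[item['filename']] = (boxes, classes)
    return annotations
```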

I have not figured out how to use this to run inference on new images yet. I could probably load them manually, normalize them, and just call the model on them, but I have not had the time to try.

Thank you. I will read over it and give it a try with my images.


Hi, I went through some of your notebooks. More specifically, I am looking at your fluke detection redux notebook, and I was wondering whether you were able to use get_preds on the rest of the training-set images that were not used in train/validation, i.e. get predictions for a test set that has no bbox annotations. I get an error that EmptyLabel is not subscriptable. Here is a test script

I have not tried running inference yet. I have thought about it only a little, and I do not have a good suggestion for how to run the model on new data. You could do something tricky, like splitting the files you want to infer on into two groups and assigning first one group and then the other to the validation set, and running inference on that. I think this should work.
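Roughly what I have in mind, with the fastai v1 data block API; `infer_names`, `annotations`, and `learn` are placeholders, and the dummy label for unannotated images is just a guess at a workaround:

```python
from fastai.vision import (ObjectItemList, get_transforms,
                           bb_pad_collate, imagenet_stats)
from fastai.basic_data import DatasetType

def get_y_func(path):
    # Real annotation if we have one, otherwise a dummy box so the
    # data block does not choke on unannotated images.
    if path.name in annotations:
        boxes, classes = annotations[path.name]
        return [boxes, classes]
    return [[[0, 0, 1, 1]], ['whale']]   # dummy; reuse one of your real class names

data = (ObjectItemList.from_folder('images')
        .split_by_files(infer_names)          # new images become the validation set
        .label_from_func(get_y_func)
        .transform(get_transforms(), tfm_y=True, size=224)
        .databunch(bs=16, collate_fn=bb_pad_collate)
        .normalize(imagenet_stats))

learn.data = data                             # reuse the already-trained learner
preds, _ = learn.get_preds(ds_type=DatasetType.Valid)
```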

I also suspect that you might be able to create a databunch without a train set, i.e. assign everything to the validation set. But I am not sure if this will work.

I have no clue if this will work - just trying to be helpful.

A way that would work for sure would be to set the model to eval, read the images manually, normalize them, and call the model on them. I have not attempted that yet, so I have no code to share.
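But roughly, something along these lines should do it with plain PyTorch (untested sketch, assuming the model was trained on ImageNet-normalized 224x224 inputs and `learn.model` is the trained fastai model):

```python
import torch
from PIL import Image
import torchvision.transforms as T

# Standard ImageNet preprocessing; match whatever size/stats you trained with.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = learn.model.eval()                     # switch off dropout / batchnorm updates
with torch.no_grad():
    x = preprocess(Image.open('new_image.jpg').convert('RGB')).unsqueeze(0)  # add batch dim
    x = x.to(next(model.parameters()).device)  # put the input on the model's device
    preds = model(x)                           # raw activations; decode boxes as during training
```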