Counting segmented objects

Hi everybody,

I asked a similar question before but didn’t get any responses – I’m not sure if that’s because my question was too vague, was posted at the wrong time, or wasn’t answerable.

So I’m sort of re-wording it and posting it again in hope of pointers.

Here’s my machine learning problem:

I have images of petri dishes containing creatures (sea lice). I’ve segmented all of the lice in each image (using Labelbox).

Each louse has the label of ‘louse’, but also the further label / sub-classes of:

  • “Male” or “Female”
  • “Preadult” or “Adult”

I want to figure out how to achieve the following things using fastai:

  • Count how many lice are in each image
  • Learn their correct ‘sub classes’ (eg, "This louse is ‘Female’ and ‘Adult’ ")
  • Calculate the mean grey value of each louse (convert to 8-bit greyscale – to measure ‘how light/dark’ the louse is)
  • Try to calculate the louse size/area

Does anybody have any pointers for how to start on this (aside from preparing the data which I’m already doing) ?

I’ve seen that there are some non-fastai libraries that might be able to handle this, but my fastai/Python skills aren’t advanced enough to integrate these libraries with fastai.

Any pointers would be really useful! Thanks in advance!

I assume you’ve labelled/segmented only a portion of your images using Labelbox, and you are looking for machine learning to label the remainder? You could treat it as multiclass (classes == male preadult, male adult, female preadult, female adult) or multilabel (classes == male, female, preadult, adult). Or you could tackle it as two separate tasks: first identify gender, then maturity. I’d go with the first option and use the fastai camvid notebook as a guide to how to use a Unet to segment images.
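
For the multiclass Unet route, a minimal sketch in the spirit of the camvid notebook might look like this. It assumes fastai v2 and a hypothetical folder layout (`images/` and `masks/`, with mask names derived from image names), so adjust to your own data:

```python
from fastai.vision.all import *

path = Path('lice_data')                      # hypothetical dataset root
codes = ['background', 'male_preadult', 'male_adult',
         'female_preadult', 'female_adult']   # the multiclass option above

def label_func(fn):
    # hypothetical naming convention: images/foo.png -> masks/foo_mask.png
    return path/'masks'/f'{fn.stem}_mask{fn.suffix}'

dls = SegmentationDataLoaders.from_label_func(
    path, get_image_files(path/'images'), label_func, codes=codes, bs=4)

learn = unet_learner(dls, resnet34)           # Unet with a resnet34 backbone
learn.fine_tune(8)                            # tune the number of epochs to taste
```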

The other bullet points are “easily” handled without machine learning – that is, counting the number of lice in an image, calculating the mean grey value of each louse, and calculating the louse size/area. You could use machine learning, but using cv2 is far easier in my view. Once you have a segmented mask from your camvid-like model, you can use cv2 functions like findContours and contourArea to get object stats, and then numpy to establish colour/grey stats. There is definitely a learning curve to cv2 and/or similar packages like scipy, but you probably need to learn them to get the most out of such segmentation tasks.
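
A rough sketch of that post-processing, assuming you’ve saved the model’s output as a binary mask (louse = 255, background = 0) alongside the original photo (filenames here are hypothetical):

```python
import cv2
import numpy as np

img = cv2.imread('dish.png')
mask = cv2.imread('dish_mask.png', cv2.IMREAD_GRAYSCALE)

grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # 8-bit greyscale for the light/dark measure
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

print(f'{len(contours)} lice found')
for i, c in enumerate(contours):
    area = cv2.contourArea(c)                  # louse size in pixels
    blob = np.zeros_like(mask)                 # mask for this louse alone
    cv2.drawContours(blob, [c], -1, 255, thickness=cv2.FILLED)
    mean_grey = grey[blob == 255].mean()       # mean grey of this louse's pixels only
    print(f'louse {i}: area={area:.0f} px, mean grey={mean_grey:.1f}')
```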

You could tackle it as an object detection problem (see [Kaggle] Discussion on Global Wheat Detection Challenge) and while it’d give you the number of lice, it wouldn’t cover what you need in terms of the light/dark measurement.

Now there are neural network approaches to counting/estimating objects. I might turn to them if I were counting people in a crowd, or oranges in a pile, or maybe rice grains on a plate. But if your lice are generally not touching one another much, or are more than, say, 60 pixels in size, and you don’t have hundreds of lice on a dish, I’d stick to morphology (cv2). You can look at various Kaggle competitions like https://www.kaggle.com/c/data-science-bowl-2018/notebooks to see what’s possible.
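
For well-separated blobs, the counting can be even shorter: `cv2.connectedComponentsWithStats` returns the count, per-blob areas and bounding boxes in one call (same hypothetical mask file as above):

```python
import cv2

mask = cv2.imread('dish_mask.png', cv2.IMREAD_GRAYSCALE)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)

# label 0 is the background, so the louse count is n - 1
print(f'{n - 1} lice; areas: {stats[1:, cv2.CC_STAT_AREA].tolist()}')
```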

If this doesn’t help, see if you can post an image so people understand what about the task you are finding difficult. Each use case means different tools are most appropriate, so it’s hard to give a general answer.

8 Likes

@digitalspecialists Thanks for the detailed, lengthy response.

Sort of! The real-life use case of this neural net will be that the lab techs can just upload a photo of a particular petri dish and instantly get back the results (A lice in total, B of which are male, C of which are female, and of which D are preadult and E adult).

The segmentation was just because I thought it’d make counting more accurate than using bounding boxes. But this might just be my lack of knowledge.

This has been my plan/approach to date, but what’s really confusing me is that the camvid model seems to only say “there are X bicycle pixels, in these locations; there are Y pedestrian pixels, in these locations”. It doesn’t count instances, as far as I’m aware.

And I’m struggling with adapting it to do that.

Brilliant! I had no idea about any of this stuff. I come from a React / JS / Node JS / PHP background, so I only really use Python for fastai stuff.

This could also be promising. Surely I could do all of it with this, because I could get the coordinates of the objects, then get the mean grey value based on them? If it had to use bounding boxes, the mean grey value would be slightly affected by the background (non-louse) colour, but not by much.

Thank you – The issue with these, I find, is that most aren’t on fastai, and I have no idea where to start with implementing them in, or porting them to fastai.

Thank you, good suggestion. Here is one of the images.

So the output might be something like:

  • 33 lice
  • (say) 13 male preadult
  • (say) 3 male adult
  • (say) 16 female preadult
  • (say) 1 female adult
  • louse 1 mean grey value: X, louse 2 mean grey value: Y, … etc.

I just couldn’t find, in any example code, how instance segmentation could be implemented. The examples I find show image segmentation (semantic segmentation), not instance segmentation.

Thanks again for the super detailed response. I appreciate it.

1 Like

I’ve made a gist to point you in the right direction. As an aside, you can use code like this to speed up the labelling process, too.

3 Likes

You’re a genius.

Seriously though, this is insanely good. Thank you!

So, I get (with this code) how I can automate the segmentation/masks for each image now:

  • Save each object as its own .png image
  • Assign each object the class of ‘louse’
  • Manually label the training data with the correct sub-classes (M/F, preadult/adult)

Would you broadly agree with this approach?

This then leads me to the next stage, of training a neural network to do instance segmentation and learn the correct subclasses itself.

I saw this thread, which seems really promising!

Did you have any general tips on instance segmentation in fastai? No worries if not – You’ve been incredibly helpful to me here, and I really appreciate it.

Thanks again.

1 Like

No – the code is what you’d use on the output of an image segmentation CNN such as a Unet, to turn image segmentation into object segmentation. Because I don’t have a trained network for lice, I started by showing how you can quickly do image segmentation for a given image using saturation separation. But really, you should train a neural network (a Unet) to do that.
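
For anyone reading along without the gist, a saturation-separation step might look something like this (my guess at the general approach, not the actual gist code): coloured lice pull away from a pale dish in the HSV saturation channel.

```python
import cv2
import numpy as np

img = cv2.imread('dish.png')                          # hypothetical filename
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]   # saturation channel
_, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,         # remove speckle before
                        np.ones((3, 3), np.uint8))    # splitting into blobs
```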

Though use cases certainly exist, I’ve never had to use instance segmentation (e.g. Mask R-CNN). I’ll take the higher accuracy of image segmentation (e.g. a Unet) coupled with post-processing, as in the gist, to identify each object, almost every time.

1 Like

I see, okay.

So just to be clear:

You’re saying that the object segmentation NN would produce the equivalent of the coloured/binary regions (lice against blank background), and then the code could separate them out into individual instances, which could then be given to another NN to train on instance segmentation?

But also that, ideally, a NN would do this? So one NN would spot all regions of interest (like the saturation separation code does), and then assign labels to them?

You can arrange the jigsaw pieces in a pipeline however works best for your task.

You might be able to do the task with one Unet NN that can distinguish between male adult, male preadult, female adult, and female preadult. But they look very similar to me, and the model may struggle to tell them apart! If you were identifying bicycles, ships, cars, and rockets, though, it’d do just fine. You then take the output and, using morphology (separation of blobs), count them up.

Or you could make a single-class lice-segmenting NN, and feed the detected lice images one by one to another NN that performs image classification (male adult, male preadult, …). Not instance segmentation.
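
That second option might look roughly like this, assuming a trained fastai classifier (`clf_learn` here is hypothetical) and a binary uint8 mask from the segmenting NN:

```python
import cv2

def classify_lice(img, mask, clf_learn):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)              # crop around each louse
        crop = cv2.cvtColor(img[y:y+h, x:x+w],        # cv2 loads BGR;
                            cv2.COLOR_BGR2RGB)        # fastai expects RGB
        pred, _, probs = clf_learn.predict(crop)      # e.g. 'female_preadult'
        results.append((str(pred), probs.max().item()))
    return results
```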

Basically, just try a few things and see what works. fastai has enough example notebooks to experiment quickly in new areas.

3 Likes

Thanks for your continued thoughts and ideas on this. I appreciate it.

Just to be totally clear:

Would such a hypothetical pipeline (in your view) have non-neural net stages?

For example:

  • NN to find “Lice pixels and non-lice pixels”
  • Python code (probably cv2) to separate blobs and count them up
  • Another NN to classify each of these separate images as male adult, male preadult, female adult or female preadult

I think what I’m asking & seeking clarification on is:

Would the intermediate stages of processing be outside of a NN and effectively “manual” (normal Python code), with the result of that stage then fed into another NN? As opposed to a single NN handling all of this.

Either way, you’ve given some excellent advice here, so I will experiment now!

2 Likes