Advice for working on TEM images of kidneys

Hi,

I am a novice working through the Deep Learning for Coders book.
I want to work on transmission electron microscopy (TEM) images.

I have realized that annotating them is very time consuming, so I want to try to build a model to segment the images. I am a bit overwhelmed, so some advice to get me going would be much appreciated. As an aside, these images are grayscale and currently in TIFF format.

Any help is appreciated. L

Hi there,

Training a U-Net for segmentation requires pixel-level masks (where each pixel has a class label). If you are patient you can draw them yourself; try starting with a low number, say 10 to 30 images, and use transfer learning. By the way, what exactly would you like to segment? Is it a binary segmentation problem, or do you have multiple classes?
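
Something like this could get you started. This is an untested sketch: the folder layout, the `codes` list, and `label_func` are placeholders for your own data, not a fixed recipe:

```python
from fastai.vision.all import *

path = Path('data')                          # hypothetical layout: data/images/*.tif, data/masks/*.png
codes = ['background', 'object_a']           # binary example; add more class names as needed

def label_func(fn):
    # assumes each image has a mask PNG with the same stem, where pixel values
    # are integer class indices matching the order of `codes`
    return path/'masks'/f'{fn.stem}.png'

dblock = DataBlock(
    blocks=(ImageBlock, MaskBlock(codes)),   # fastai opens grayscale TIFFs and converts them to RGB
    get_items=get_image_files,
    get_y=label_func,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=RandomResizedCrop(256),        # train on crops rather than the full-size images
)
dls = dblock.dataloaders(path/'images', bs=4)

learn = unet_learner(dls, resnet34)          # transfer learning from an ImageNet-pretrained backbone
learn.fine_tune(8)
```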

If you don’t want to draw masks, you can still try point regression (train a model to find key points in an image) or object detection (train a neural net to predict bounding boxes around objects). For the latter, fastai unfortunately does not have support yet, but there are other open-source libraries.
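
For the key-point option, fastai does support point regression via `PointBlock`, along the lines of the head-pose example in the book. A rough, untested sketch; the image folder and the `get_center` lookup are made up for illustration:

```python
from fastai.vision.all import *

path = Path('data/images')                 # hypothetical image folder

def get_center(fn):
    # hypothetical lookup: return the annotated (x, y) key point for this image,
    # e.g. read from a CSV keyed by filename
    return tensor([2040.0, 1536.0])

dls = DataBlock(
    blocks=(ImageBlock, PointBlock),
    get_items=get_image_files,
    get_y=get_center,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(256),
).dataloaders(path, bs=8)

learn = cnn_learner(dls, resnet18, y_range=(-1, 1))  # fastai rescales point coords to [-1, 1]
learn.fine_tune(5)
```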

Hope that helps.

Dear jc-denton,

Thanks a lot. I have annotated substantial parts of the images with a brush tool, so I did not use bounding boxes. I guess I could give all the non-annotated parts a generic background label.
I do not have 10 images yet, but I am getting there. Annotating one image is a hefty job, as I try to annotate tiny parts at pixel level (and a given image contains many such parts).

I have 3 classes I’d like to annotate, but I could drop one if needed.

Any other help is appreciated. This is all pretty overwhelming for a novice, and exciting too.

Kind regards, Louis

How big are your images and what are you trying to segment? If the images are especially large and the features you are trying to segment can be identified at a zoomed-in scale (without relying on global context), then I think having a few cases is enough, because you can divide the images into patches and train a segmentation model on the patches. Then, when you are done training, you can apply the model to the patches and stitch the results together into a final segmentation map for the whole image.
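
A rough, model-agnostic sketch of the patch-and-stitch idea in numpy (the patch size and the `predict_patch` call are placeholders for whatever model you train):

```python
import numpy as np

def split_into_patches(img, patch=256):
    """Cut a 2-D image into non-overlapping patch x patch tiles (edge remainders are dropped)."""
    h, w = img.shape[:2]
    return [img[y:y+patch, x:x+patch]
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

def stitch_predictions(preds, img_shape, patch=256):
    """Reassemble per-patch masks (in the same order) into one full-size segmentation map."""
    h, w = img_shape[:2]
    full = np.zeros((h, w), dtype=preds[0].dtype)
    i = 0
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            full[y:y+patch, x:x+patch] = preds[i]
            i += 1
    return full

# usage sketch: preds = [predict_patch(p) for p in split_into_patches(img)]
#               mask  = stitch_predictions(preds, img.shape)
```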

Dear Tanishq,

Thank you. I have been thinking about such an approach. The images are 4080 x 3072 pixels. I am thinking about cutting them into patches of roughly 1000 x 1000.

Another thing I am struggling with is how much to annotate and, more importantly, whether to annotate the things I am not interested in.

My reasoning is that annotating the things I am not interested in might help:
I am interested in object A. However, object A is always found on one side of object B. On the other side we sometimes see object C.
I am not interested in object C. But if you recognise C, then you know that A should be on the other side of B. (I assume that B itself is not too hard to recognize.)

In this situation it seems to me that annotating objects A, B, and C is relevant. But I am not sure.

As mentioned, I am a bit of a newbie, and the annotation part is really pulling me in all directions.

Best wishes, Louis

Again, it depends on the size of the features in your images, but you might want to cut them into smaller patches, like 512x512 or even 256x256. At 256x256 that gives you up to 180 patches per image, so just a few images would probably be enough for successful training.
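
For reference, the non-overlapping patch counts on a 4080 x 3072 image work out like this:

```python
w, h = 4080, 3072
for patch in (1024, 512, 256):
    print(patch, (w // patch) * (h // patch))
# 1024 -> 9 patches, 512 -> 42 patches, 256 -> 180 patches per image
```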

I would recommend starting out by only labeling the object of interest. The CNN could potentially learn the correlation that the B and C structures lie near object A without directly seeing annotations of those objects.