Segmentation in Medical Images using pixel coordinates as labels

I have a small task here where I would like to segment mitosis in a medical image dataset, but the dataset is a bit different from CamVid or the other common datasets you find online.

The dataset contains input images, annotations and ground truth. The input images are very large (4000 x 4000), so I will definitely have to resize them. But my major concern is the annotation.csv file, which has a number of rows and columns. Each row holds the pixel coordinates that need to be segmented.

e.g. IMG 1 has an annotation.csv file with 3 rows, with values something like this (there are also many NaN values, I guess for pixels which are normal; only the pixels which need to be segmented are included). Together these form a small region in the image.

1877 8 987 667 8 98 778 … NaN NaN
3447 99 77 433 3 66 … NaN NaN
333 555 7 55 66 777

How can I use the pixel coordinates as labels for training data?

Any help or suggestions here would be greatly appreciated

You could write a custom loader - see the source code for RLE masks in the library as a helper.

Me, I’d just rasterise the coordinates into an ordinary png mask.


Thank you very much for the reply. I will look into the RLE mask code in the library. Could you please explain a bit more what you meant by rasterising the coordinates into an ordinary png mask?

Regards,
Sahil

I don’t know what format your annotation.csv is in. But whether it is x/y/class, RLE, polygons, point clouds, or something else, using PIL/cv2/np you should be able to construct a mask.png of the same size as the input image by turning the x/y, RLE, or polygon descriptors into pixels with a class value, and then train on input.png and mask.png.
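For the x/y case, something along these lines would do it (just a minimal sketch; the size, coordinates and file name below are made up):

import numpy as np
from PIL import Image

# Rasterise (x, y) coordinate pairs into a single-channel mask the size of the input image
h, w = 4000, 4000
mask = np.zeros((h, w), dtype=np.uint8)

coords = [(1877, 8), (987, 667)]   # hypothetical (x, y) pairs taken from annotation.csv
for x, y in coords:
    mask[y, x] = 1                 # class value 1 = mitosis, 0 = background

Image.fromarray(mask).save("mask.png")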

I would really appreciate some help here (if possible, something to read or a code snippet).

The images are 2024 x 2024 in size. An annotation.csv is given for each image, with some rows and columns. For image 1 it has 2 rows -

158 23 78 67 44 66 345 …
678 898 66 765 44 555 …

The csv is 2 x 876 in size.

These values are the coordinates of the mitosis regions in the image. I am still not able to work out how to create masks using them. I tried using "open_mask_rle" from the library to check whether these values were RLE, but it doesn't work. I am not sure how to proceed.

hello @digitalspecialists,

I tried using numpy. I used something like this:

import matplotlib.pyplot as plt
import numpy as np

L = [234, 78, 56, 98, 555, 876, 998, ...]  # put all the values from the csv into a single list
n = np.zeros((2084, 2084), dtype="uint8")  # matrix the same size as the input image
n.ravel()[L] = 255  # set those positions to 255 so the pixels show up white

plt.figure(figsize=(20,12))
plt.imshow(n, cmap="gray", vmin=0, vmax=255)

After doing this I do get an image, but I don't see anything in it (no white pixels). This might be because of the resolution; also, if I reduce the size of the matrix to, say, (224, 224), what I get is a continuous line of white here and there.

I don't know if this is the correct mask I should be using as an input.

Regards,
Sahil

I agree with @digitalspecialists. I would also create a mask file for each image and then use the segmentation API.

I guess the first step is to create a correct mask file.
You have to make sure you know how to interpret the values in the csv file (whether they are arranged in column-first or row-first order).

I created a fake image and a fake mask using part of your code and it seems to be doing alright.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

data = np.zeros([32,32],dtype='uint8')
data[10:20,10:20] = 255 # created fake data

plt.imshow(data, interpolation='nearest')
plt.show()

data.ravel().shape  # sanity check: the flattened array has 32*32 = 1024 elements
L = range(48, 100)  ## my fake mask co-ordinates (flat indices into the raveled array)
data.ravel()[L] = 100 ## borrowing your code; ravel() returns a view here, so this writes into data

#checking where mask was created...
plt.imshow(data, interpolation='nearest')
plt.show()
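One thing to check (just a guess, since I don't know the csv layout): if the values are alternating x and y coordinates rather than flat indices into the raveled array, they need to be paired up before indexing, roughly like this:

import numpy as np

vals = [234, 78, 56, 98, 555, 876]   # hypothetical values from one csv row
xs, ys = vals[0::2], vals[1::2]      # even positions = x, odd positions = y

mask = np.zeros((2084, 2084), dtype='uint8')
mask[ys, xs] = 255                   # index rows with y and columns with x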

Hello, if you need to convert position data into ordinary images, maybe these kernels from Kaggle can help you: https://www.kaggle.com/c/quickdraw-doodle-recognition/kernels
It was a classification competition, but just look at the 'display images' kernels.
Then, since it is a binary classification per pixel that you want to do, just put 0 for non-mitosis pixels and 1 for mitosis pixels. Then you will have your mask.
Maybe also create a codes.txt file in which you put the names of the classes.
After that I think it will fit into the code from lesson 3.
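For instance, something like this (just a sketch; the file names and coordinates are placeholders):

import numpy as np
from PIL import Image

coords = [(158, 23), (78, 67), (44, 66)]      # hypothetical (x, y) pairs from the csv
mask = np.zeros((2084, 2084), dtype=np.uint8)
for x, y in coords:
    mask[y, x] = 1                            # 1 = mitosis, 0 = non-mitosis
Image.fromarray(mask).save("A00_01_mask.png")

with open("codes.txt", "w") as f:
    f.write("background\nmitosis\n")          # one class name per line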

Thank you Matthieu and everybody who showed interest in helping out with this issue, but I think I found a solution. Please find below a small snippet of code which can be used when the label data is in this pixel-coordinate format.

import os
import numpy as np
import matplotlib.pyplot as plt
import scipy.misc                    # scipy.misc.imsave needs an older scipy; imageio.imwrite is the modern equivalent
from skimage.color import rgb2hed

for k in range(len(image_paths)):    # go through the folder where I have all the images stored

    image = plt.imread(path + image_paths[k] + ".png")   # read the image
    print(image_paths[k] + ".png")

    # As the images are H&E (Haematoxylin and Eosin) stained, convert them to the HED
    # colour space (I discovered this while browsing some skimage content - something new ;P)
    image = rgb2hed(image)

    scipy.misc.imsave(os.path.join(path_train, image_paths[k] + ".png"), image)
    X_train.append(image[:, :, 0:3])

    height = image.shape[0]
    width = image.shape[1]
    patch = np.full((height, width), 255)

    # This is the part which takes the pixel values provided and creates a
    # patch-like structure in the image.
    f = open(path + image_paths[k] + ".csv", "r")

    for line in f:
        lines = line.split()
        pixels = lines[0].split(",")

        # step by 2, as each pair of values denotes an x and a y coordinate (lucky guess)
        for i in range(0, len(pixels), 2):
            patch[int(pixels[i + 1])][int(pixels[i])] = 0

    scipy.misc.imsave(os.path.join(path_label, image_paths[k] + ".png"), patch)  # saved it as a patch and it worked
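As a side note, the inner loop could also be written with NumPy fancy indexing; a rough sketch, assuming each csv line alternates x and y values as above:

import numpy as np

pixels = ["158", "23", "78", "67"]   # hypothetical values from one csv line, already split on commas
vals = np.array(pixels, dtype=int)
xs, ys = vals[0::2], vals[1::2]      # even positions = x, odd positions = y

patch = np.full((2084, 2084), 255, dtype=np.uint8)
patch[ys, xs] = 0                    # mark the mitosis pixels, same effect as the loop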

Apologies if the information I provided in the first place was not sufficient.

Regards,
Sahil


Sir, I am also working on mitosis detection using the MITOS dataset. I am currently stuck on extracting the cell patches which I will use for feature extraction. Any help or suggestions would be greatly appreciated.

Sir, can you please explain the difference between image_paths and path?

Hello Adarsh,

Sorry for the confusion, but path is my general path, which points to the folder containing the images, and image_paths is where I have given the names of the images I want to use.

eg - image_paths=[‘A00_01’, ‘A00_02’]

Hence the line of code:

image = plt.imread(path+image_paths[k]+".png") #reading the image
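Concretely, that builds a path like this (the folder name below is just an illustration):

path = "/data/mitos/"                  # hypothetical folder containing the images
image_paths = ['A00_01', 'A00_02']     # the image names, without extension
path + image_paths[0] + ".png"         # -> "/data/mitos/A00_01.png"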

You can see I am reading from "path" + the name of the image given in image_paths + ".png". Which dataset are you using for this, the 2012 detection one or something else?

Regards,
Sahil Karkhanis

Sir, I am using the 2014 detection dataset. Please share a link so that I can get a description of the dataset you used, to take advantage of the above code. And if you could share your whole code, it would be helpful for my mitosis detection problem. Thank you.

@sahilk1610
Hi,
Currently I am working on a task somewhat similar to yours, but my task is to detect nuclei in images. The dataset that I am using consists of images and ground truth in the form of pixel coordinates of the nuclei. For each image there is a .csv file containing N rows and 2 columns, which gives the positions of the nuclei. There are 100 images in total and 100 .csv files as ground truth.
I am not sure how to train a model with this ground truth: whether to convert it into binary images and train the model on the input images and the binary images, or to combine the .csv files into one and train the model with that.
Your help will be highly appreciated

Hello Sarah, @sarah1

Apologies for the late reply, but the snippet I shared above should solve the problem for you.

Please try it and if you face any problems feel free to tag me here again.

Regards,
Sahil

Hi Sahil,
Thanks for the reply. I am currently working on my thesis on cell detection and classification and am really stuck. Is there a way I can get some help from you? If you can help me out in any way, I would be very thankful. It would mean a lot to me.

regards

You have the coordinates of the nucleus centres, thus not the pixel masks as in the MITOS dataset. Since information on the nucleus shape is missing, you could create a bounding box for each nucleus, or create a mask, or extract a sub-image centred at those coordinates with a size able to contain at least one and ideally no more than one nucleus. The specific size depends on the tissue/cells you are investigating.
Be careful that nucleus ≠ cell (you mention nuclei detection in one message and cell detection in the other).
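For example, a rough sketch of extracting a fixed-size sub-image centred at each coordinate (the patch size of 64 is only an illustration; pick one that fits your nuclei):

import numpy as np

def crop_patch(image, x, y, size=64):
    # Crop a size x size window centred on (x, y), clipped at the image borders
    half = size // 2
    h, w = image.shape[:2]
    y0, y1 = max(0, y - half), min(h, y + half)
    x0, x1 = max(0, x - half), min(w, x + half)
    return image[y0:y1, x0:x1]

image = np.zeros((500, 500, 3), dtype=np.uint8)   # stand-in for a real image
patch = crop_patch(image, x=120, y=240)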