Prediction on whole slide histology images

(Joan Gibert Fernandez) #1

Hi!

I am completely new to this forum, so apologies in advance if someone has already asked something similar (I didn't find anything).

I am starting to work with whole slide histology images for different classification purposes (tumor type, subtype, etc.). As some of you already know, the files you have to manage are extremely high resolution (sometimes 100,000x100,000 pixels and 10 GB+). So, as you can imagine, I am tiling each image into 224x224 patches, BUT I am kind of stuck with the labeling and/or the patch prediction aggregation. Since I do not have enough time to manually curate every patch and check whether it contains tumor tissue or not, I am more inclined to use the second approach (unlike the breast dataset on Kaggle, which has per-patch labels). To do this I found this approach, linked from this post, which could solve my problem (the whole repo is based on PyTorch, so that is a great point here).
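For reference, this is roughly how I am tiling the slides at the moment (a minimal sketch with openslide-python; the path, tile size and the crude background check are just placeholders):

```python
import openslide
from openslide.deepzoom import DeepZoomGenerator

# open the whole slide image (path is a placeholder)
slide = openslide.OpenSlide("TCGA-xx-0001.svs")

# DeepZoomGenerator handles the pyramid levels and the tiling grid for us
tiles = DeepZoomGenerator(slide, tile_size=224, overlap=0, limit_bounds=True)

level = tiles.level_count - 1           # highest-resolution level
cols, rows = tiles.level_tiles[level]   # tile grid size at that level

for col in range(cols):
    for row in range(rows):
        tile = tiles.get_tile(level, (col, row))   # RGB PIL.Image
        # crude background filter: keep the tile only if it contains some dark pixels
        if tile.convert("L").getextrema()[0] < 200:
            tile.save(f"tiles/tile_{col}_{row}.png")
```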

However, I would like to know if this could be implemented using the fastai library. Any advice on this?

Thanks!
J

(Joan Gibert Fernandez) #2

Hi @imrandude,

I found your post some hours ago.

How did you implement the fastai pipeline with whole slide images? Did you tile them and feed the tiles in as a DataBunch?
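Just to be concrete, what I had in mind was something like this (a fastai v1 sketch, assuming the tiles are already saved into one subfolder per class; the folder names are placeholders):

```python
from fastai.vision import *

# tiles saved as tiles/train/LUAD/*.png, tiles/train/LUSC/*.png, tiles/valid/...
data = ImageDataBunch.from_folder(
    "tiles", train="train", valid="valid",
    ds_tfms=get_transforms(flip_vert=True),   # histology has no canonical orientation
    size=224, bs=64,
).normalize(imagenet_stats)
```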

Thanks!
J

(Joan Gibert Fernandez) #3

As an update on the issue, I found this thread in the TensorFlow forum. It seems that, by decreasing the batch size to 1 and running on a CPU, you can feed in quite a big image without tiling.

I guess it would take forever to train, but I might give it a try. Any thoughts on this?

(Imran) #4

What type of tissue sample are you working with? Is your requirement to create a mask on whole slide images, or something else?

(Joan Gibert Fernandez) #5

Hi,

I am actually doing a fairly “simple” binary classification using TCGA whole slide images from the Digital Slide Archive. Concretely, I want to build a CNN that differentiates LUAD from LUSC WSIs.

As you may imagine, the WSIs do not contain only tumoral tissue, so I am not sure what the best way is to 1) tile the WSIs (if necessary), 2) train, 3) predict and 4) merge the patch predictions in a meaningful way (there is a great post about this linked in my first comment).
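For step 4, the simplest thing I can think of is averaging the patch probabilities per slide and thresholding, something like this (a toy sketch with made-up numbers; a majority vote over patch labels would be the other obvious option):

```python
import numpy as np
from collections import defaultdict

# (slide_id, P(LUAD)) pairs coming out of the patch classifier -- made-up values
patch_preds = [("TCGA-xx-0001", 0.91), ("TCGA-xx-0001", 0.85),
               ("TCGA-xx-0001", 0.15), ("TCGA-xx-0002", 0.08)]

by_slide = defaultdict(list)
for slide_id, p in patch_preds:
    by_slide[slide_id].append(p)

# average the patch probabilities per slide, then threshold for the slide-level call
slide_calls = {s: ("LUAD" if np.mean(ps) > 0.5 else "LUSC") for s, ps in by_slide.items()}
print(slide_calls)   # {'TCGA-xx-0001': 'LUAD', 'TCGA-xx-0002': 'LUSC'}
```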

Any thoughts? :slight_smile:

(Imran) #6

Just in case you missed it in the original Medium post, the code below is the implementation.

Now, I have not tried this with fastai, but given that the paper used a ResNet for patch classification, we could use fastai for much more effective training.
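Something along these lines should work (a rough fastai v1 sketch, assuming `data` is a DataBunch of 224x224 tiles like the one discussed above):

```python
from fastai.vision import *

# data = ImageDataBunch.from_folder(...)  # the tile DataBunch from the earlier post

learn = cnn_learner(data, models.resnet18, metrics=accuracy)

learn.fit_one_cycle(4)                              # train the new head first
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-3))    # then fine-tune the whole ResNet
```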

(Joan Gibert Fernandez) #7

Yes, I ended up at that post because of the paper published alongside that repo. I guess I could try to tile the images and use them for training with the fastai library.

However, my first post was more about how to manage the prediction of the tiles in the test set with fastai. Since the WSIs contain non-tumoral tissue, I was wondering if fastai has some kind of function to deal with this ‘merging predictions’ issue. Do you have any idea?
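To be concrete, this is the kind of ‘merging’ I mean, done by hand for one slide (a sketch assuming `learn` is the trained patch classifier; the folder and the 0.9 confidence cutoff for dropping non-tumoral or ambiguous tiles are placeholders):

```python
from fastai.vision import *

# predict every tile of one test slide with the trained patch classifier
tile_paths = get_image_files("tiles/test/TCGA-xx-0003")
probs = torch.stack([learn.predict(open_image(p))[2] for p in tile_paths])  # (n_tiles, n_classes)

# keep only tiles the model is confident about, then average and call the slide
confident = probs[probs.max(dim=1).values > 0.9]
slide_pred = learn.data.classes[int(confident.mean(dim=0).argmax())]
print(slide_pred)
```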

I hope what I was asking is clearer now, and thank you for the feedback!

(Imran) #8

Hi Joan,

I hope you have found a solution; if not, you can refer to the code here (https://github.com/SBU-BMI/u24_lymphocyte) for the paper https://arxiv.org/abs/1905.10841.

(Joan Gibert Fernandez) #9

Hi @imrandude,
Thanks for the paper! Actually, this is similar to what DeepSlide does, but it generates somewhat nicer heatmaps for interpretation.
However, I was looking for something that interprets these heatmaps in an unbiased way. Imagine that, in this article, there were a way to discern whether the TILs are inside the tumor and that this turned out to be a relevant clinical feature.
In my specific case, I was looking for a script that takes the predictions, post-processes them and derives new features that take relative positions into account (which is what the pathologist would do in the end).
I guess this is specific to every problem you face, but I was looking for some examples, which I guess are not out there yet.
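To make that concrete, this is the kind of post-processing I have in mind (a toy sketch on dummy tile-grid heatmaps; the grids, thresholds and the ‘inside/adjacent’ definition are all made up):

```python
import numpy as np
from scipy import ndimage

# dummy heatmaps on the tile grid: P(tumor) and P(TIL-rich) per tile,
# as they would come out of two patch classifiers
rng = np.random.default_rng(0)
tumor = rng.random((50, 50))
tils = rng.random((50, 50))

tumor_mask = tumor > 0.5
til_mask = tils > 0.5

# label connected tumor regions and grow them by one tile,
# then ask which TIL-positive tiles fall inside or next to a tumor region
tumor_regions, n_regions = ndimage.label(tumor_mask)
tumor_neighbourhood = ndimage.binary_dilation(tumor_mask, iterations=1)

tils_in_tumor = (til_mask & tumor_neighbourhood).sum() / max(til_mask.sum(), 1)
print(f"{n_regions} tumor regions; {tils_in_tumor:.1%} of TIL tiles inside/adjacent to tumor")
```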
In any case, thank you for your help. I hope to post an update on this at some point :smile:

(Joan Gibert Fernandez) #10

Hi @imrandude,

In this work, the authors used an RNN to solve this problem. I hope it is helpful.
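The gist, as I understand it, is to run an RNN over the sequence of patch features of a slide and classify from the final state (just a toy PyTorch sketch of the idea, not the authors' actual architecture):

```python
import torch
import torch.nn as nn

class SlideRNN(nn.Module):
    """Slide-level classifier: GRU over a sequence of per-patch CNN features."""
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, patch_feats):       # patch_feats: (batch, n_patches, feat_dim)
        _, h = self.rnn(patch_feats)      # h: (1, batch, hidden), final hidden state
        return self.head(h[-1])           # (batch, n_classes) logits

# patch_feats would come from the penultimate layer of the patch CNN -- dummy data here
patch_feats = torch.randn(4, 100, 512)    # 4 slides, 100 patches each
print(SlideRNN()(patch_feats).shape)      # torch.Size([4, 2])
```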

Cheers
