Deep-Tumour-Spheroid: Segmentation of Brain Tumours with FastAI & SemTorch

Hi,

I am happy to announce that I have published my final degree project: Deep-Tumour-Spheroid

Deep Learning is making a big impact in areas such as autonomous driving, medicine and robotics. In medicine, it is helping doctors diagnose patients more accurately, make predictions about patients’ future health, and recommend better treatments.

This project contributes to an important field: medicine. In particular, its aim is to improve the automatic segmentation of Glioblastoma Multiforme (GBM) tumours using Deep Learning. GBMs are the most frequent, aggressive and lethal of all primary brain tumours.

This project presents a comparison of different Deep Learning architectures that could be employed to solve this problem, all of them trained in a similar way using the PyTorch and FastAI2 libraries. The best of these models can be used through a web application developed specifically for this task.

As a result of this project, the models can segment the tumours autonomously, reducing the workload of researchers so they can focus on what is important: trying different treatments to beat GBM tumours.

I tried the following architectures: U-Net, DeepLabV3+, HRNet Seg, Mask R-CNN and U²-Net. I used SemTorch to train them easily.
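For reference, this is roughly what training one of these models with SemTorch looks like. I am sketching it from memory of the SemTorch README, so please double-check the exact argument names there; the dataset layout, class names and hyperparameters below are just placeholders:

from fastai.vision.all import *
from semtorch import get_segmentation_learner

path = Path("data")                       # placeholder dataset root
codes = ["background", "tumour"]          # placeholder class names

# Standard fastai DataBlock for semantic segmentation
dls = DataBlock(
    blocks=(ImageBlock, MaskBlock(codes)),
    get_items=get_image_files,
    get_y=lambda o: path/"masks"/o.name,  # placeholder mask layout
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(256),
    batch_tfms=aug_transforms(),
).dataloaders(path/"images", bs=8)

# SemTorch exposes the different architectures behind one helper
learn = get_segmentation_learner(
    dls=dls, number_classes=2,
    segmentation_type="Semantic Segmentation",
    architecture_name="deeplabv3+", backbone_name="resnet50",
    metrics=[Dice(), JaccardCoeff()],
)
learn.fine_tune(20)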

I have another post about this package.

Training Notebooks

Notebooks can be found here

The best model generalizes pretty well.

Python Package

I created a Python package that allows doctors to segment tumours easily.

It can be found here
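If you just want to run the best model on a single image from Python, the usual fastai inference route looks roughly like this (export.pkl and the image name are placeholders, not necessarily the files the package ships with):

from fastai.vision.all import load_learner, PILImage

# Load a learner previously saved with learn.export() -- path is a placeholder
learn = load_learner("export.pkl")

# predict() returns the decoded mask plus the raw tensor and per-pixel probabilities
img = PILImage.create("mri_slice.png")
mask, mask_tensor, probs = learn.predict(img)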

Web App

I also created a Web App. It can be found here

Fantastic work! Thank you for sharing. :clap:

I’m glad you find it interesting!

Great work. Recently I had to use U2Net independently and then use fastai for classification. Thanks for doing this!

Thank you very much! Feel free to share it or give it a star on GitHub if you want. It’ll help me a lot!

Why am I only finding this now? Thanks, and I’m trying this out as a last resort since everything else is failing with my satellite image classification. :sweat_smile: :sweat_smile: :sweat_smile:

I’m trying to train the U-Net model for image segmentation (see this notebook). However, I’m facing a problem with how to make predictions on the test set. I’ve tried this code and got the error below. Any suggestions?

import os
import cv2

for file in os.listdir(dst_folder):
    img = cv2.imread(dst_folder + file)              # read one test image
    mask = learn.predict(img)                        # fastai prediction
    segmentation = mask[2][0].numpy()                # probabilities of channel 0
    segmentation = 255 - segmentation * 255          # invert to a 0-255 mask
    cv2.imwrite(out_pred_samples_mask + '/' + file, segmentation)

This is the error I’ve got:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 for file in os.listdir(dst_folder):
      2     img = cv2.imread(dst_folder + file)
----> 3     mask = learn.predict(img)
      4     segmentation = mask[2][0].numpy()
      5     segmentation = 255 - segmentation * 255

... 20 frames ...

<ipython-input> in encodes(self, x)
      3         pass
      4     def encodes(self, x):
----> 5         img, mask = x
      6
      7         # Convert to array

ValueError: not enough values to unpack (expected 2, got 1)

FYI, I had to make some changes to the channel numbers for deeplabv3+ with resnet34 and resnet18 backbones, otherwise a shape mismatch error comes up. I have submitted a pull request. Please do double-check it : )

And thanks for your great work!

Yijin

Can’t really tell what’s going on from your copy-paste error message snippet. Please post your code/notebook so that people can have a look and help.
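In the meantime, one thing you could try is running inference through a test DataLoader instead of calling predict on a raw OpenCV array, so the test images go through the same pipeline as during training. A rough, untested sketch (paths are taken from your snippet; you may still need to adapt it to your custom transform):

from fastai.vision.all import get_image_files
import numpy as np
import cv2

files = get_image_files(dst_folder)       # test images on disk
test_dl = learn.dls.test_dl(files)        # reuses the training-time transforms
preds, _ = learn.get_preds(dl=test_dl)    # per-pixel class probabilities

for f, p in zip(files, preds):
    mask = (p.argmax(dim=0).numpy() * 255).astype(np.uint8)   # binary tumour mask
    cv2.imwrite(f"{out_pred_samples_mask}/{f.name}", mask)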

Yijin

Many thanks for finding this issue, @utkb! Makes my day.

And many thanks to @WaterKnight for integrating it already. I noticed, however, that the pip-installed version still contains the bug. You might want to update the package on PyPI accordingly.

Awesome library!
