SemTorch: A Semantic Segmentation library built on top of FastAI

Okay, thank you very much!

@rsomani95 @muellerzr It may take me some time to improve Mask R-CNN as I don’t have my RTX card right now; I was waiting to purchase an RTX 3080, but there is no stock…

I’ll spin up a server in the cloud or use Google Colab. I will update here with my progress.

3 Likes

@WaterKnight I’m a bit rusty with the components of Mask R-CNN, but I think this code chunk is what you want for MobileNet-V2:

import torchvision.models as models
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import BackboneWithFPN

# MobileNet-V2 feature extractor: layer '17' outputs 320 channels,
# layer '18' (the final conv) outputs 1280
backbone = models.mobilenet_v2(pretrained=True).features

model = MaskRCNN(
    # BackboneWithFPN wraps the backbone in an IntermediateLayerGetter,
    # which needs the {'17': '0', '18': '1'} return-layer references;
    # it also sets out_channels (256) on the wrapped backbone for you
    BackboneWithFPN(backbone, {'17': '0', '18': '1'}, [320, 1280], 256),
    num_classes=2,  # e.g. background + 1 class; pass other args as needed
)
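For reference, this is the same pattern torchvision uses internally for its ResNet-FPN detection models (see resnet_fpn_backbone in torchvision.models.detection.backbone_utils).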



1 Like

I also opened a feature request on torchvision for supporting a MobileNet backbone in these models.

1 Like

First of all, I would like to point to the PanNuke dataset as another source for Image Segmentation experiments.

Regarding the library (thanks again for the great work!), I started to play with it a little and I was wondering (maybe naively, sorry!) how different the unet from SemTorch is from fastai's unet_learner. Are they equivalent? The reason I ask is that I am getting far better results with SemTorch, following @WaterKnight's unet notebook.

Thanks!

EDIT: I found that both come from the same unet_learner, so I guess it is some hyperparameter stuff that I am missing.

Thank you for sharing! What do you think of waiting until it is merged into torchvision, so it is easier to maintain?

Yes, my function is just a wrapper around that, so, as you discovered, it is related to randomness in training.
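If you want to check that, here is a minimal sketch of a fair comparison (assuming dls is your segmentation DataLoaders; the seed value is arbitrary):

from fastai.vision.all import *

# Fixing the seed before building each learner makes the two runs
# comparable, since both wrappers end up calling fastai's
# unet_learner under the hood
set_seed(42, reproducible=True)
learn = unet_learner(dls, resnet34, metrics=Dice())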

It contains all types of tumours, no?

My tumour dataset at Deep-Tumour-Spheroid was obtained from pictures taken with microscopes at the University of Zaragoza (Spain).

Thank you for taking your time looking at the library!

Yes, actually it is a little different from your Deep-Tumor-Spheroid. Rather than microscopy images of cells in suspension (spheroids), they are cells from human tumor biopsies, fixed and stained with Hematoxylin and Eosin. In this specific dataset, 5 different cell nuclei populations are segmented.

It’s quite a hard dataset; the authors of the paper report around 65% pixel accuracy, so there is a lot of room to improve.

1 Like

Sounds interesting! Thank you for sharing!

UPDATE:

I haven’t got an RTX 3080 yet, no stock…

It is very hard to get one.

@rsomani95 @muellerzr I will update here when I have bought the graphics card and can continue with the library.

2 Likes

Hi, has anyone experimented with running semantic segmentation prediction on various input sizes and getting the correct mask for each?
I trained a DeepLabV3 model with a fixed size of (75, 1075),
but I want to use it to predict on a lot of different input sizes,
for example: (250, 4500), (350, 4500), …
The prediction results always have size (75, 1075).
I know in fastai v1 we could do
learn.data.single_ds.tfmargs['size'] = None
Ref: Segmentation mask prediction on different input image sizes

But I don’t know a good way to deal with this in fastai v2.
Thanks

1 Like

I’m trying to get my hands on a 3090 myself. If I get that, there’s no waiting :smiley:
But yes, what you said makes sense, though I think it’ll be a while before they get around to doing that (if they do).

1 Like

Yes, good luck with the 3090 @rsomani95! There is very little stock right now!

Maybe you want to rescale the output. Maybe @muellerzr can give you a better solution!
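For example, a minimal sketch of rescaling a predicted mask back to the original size (here pred is the (75, 1075) integer mask from the learner, and h, w are assumed to be the original image dimensions):

import torch.nn.functional as F

# interpolate needs a float [N, C, H, W] tensor; nearest-neighbour
# upsampling preserves the integer class ids
pred = pred.unsqueeze(0).unsqueeze(0).float()   # [1, 1, 75, 1075]
resized = F.interpolate(pred, size=(h, w), mode='nearest')
mask = resized.squeeze().long()                 # [h, w]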

I would post this in a separate thread as it is off topic :slight_smile:

1 Like

MadeWithML referenced my library. Feel free to upvote :smiley:

5 Likes

Got one, but it isn’t compatible with fastai or the latest stable pytorch release either :confused:
More on that here: RTX 3090 / Torch 1.8 (Nightly) Compatibility

Hi, I am trying to add DANet to SemTorch (https://github.com/junfu1115/DANet/blob/56a612ec1e/encoding/models/sseg/danet.py), but as the linked code shows, the author combines three outputs together. I tried using only x[0], but every time I run lr_find I get a different result, and when I train the model I always get the wrong result. Have you ever considered adding DANet to SemTorch, or do you have any suggestions? Should I change the loss function? I appreciate any answers!

Sorry for the late reply @bowenroom. I have had a hard week.

I have considered adding new architectures. I can help you get this one added.
Can I close the issue and keep the discussion here? I need to take a look at this architecture first!

My package makes use of other archs that return a tuple or list too.
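If it helps while I look into it, here is a minimal sketch (not SemTorch's actual code; MultiOutputLoss and the 0.4 auxiliary weight are assumptions) of one common way to train a model like DANet whose forward returns several outputs: apply the base loss to each head and sum them, weighting the auxiliary heads lower.

import torch.nn as nn

class MultiOutputLoss(nn.Module):
    # Wraps a base loss so it can handle a tuple/list of predictions:
    # full loss on the main head plus down-weighted auxiliary losses
    def __init__(self, base_loss, aux_weight=0.4):
        super().__init__()
        self.base_loss, self.aux_weight = base_loss, aux_weight

    def forward(self, preds, target):
        if not isinstance(preds, (tuple, list)):
            return self.base_loss(preds, target)
        main, *aux = preds
        loss = self.base_loss(main, target)
        for p in aux:
            loss = loss + self.aux_weight * self.base_loss(p, target)
        return loss

You would pass it to the Learner as loss_func, e.g. loss_func=MultiOutputLoss(nn.CrossEntropyLoss()), and at inference keep only the first output, as you tried with x[0].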

Great, I appreciate the answer and all the effort you have put in, thanks :innocent:

I am curious if you labeled the data yourself. I am currently looking at different tools to do manual labeling (ideally with the model in the loop) and for semantic segmentation it is a mess: they are either super slow or they generate messy JSONs.
I tried:

  • Label Studio: it is utterly slow and the interface glitches a lot.
  • CVAT, which I am currently using, but I would really like a brush tool, to paint the pixels instead of drawing boxes.

Do you recommend a particular encoding for this task? COCO, etc…?
1 Like