@rsomani95 @muellerzr It may take me some time to improve Mask-RCNN as I don't have my RTX card right now; I was waiting to purchase an RTX 3080, but there is no stock…
I'll spin up a server in the cloud or use Google Colab. I will update here with my progress.
First of all, I would like to point to the PanNuke dataset as another source for Image Segmentation experiments.
Regarding the library (thanks again for the great work!), I started to play with it a little and was wondering (maybe a naive question, sorry!) how different SemTorch's unet is from fastai's unet_learner. Are they equivalent? I ask because I am getting far better results with SemTorch, following @WaterKnight's unet notebook.
Thanks!
EDIT: I found that both come from the same unet_learner, so I guess it's some hyperparameter difference I am missing.
Yes, it is actually a little different from your Deep-Tumor-Spheroid dataset. Rather than microscopy images of cells in suspension (spheroids), these are cells from human tumor biopsies, fixed and stained with Hematoxylin and Eosin. In this specific dataset, 5 different cell nuclei populations are segmented.
It's quite a hard dataset; the authors of the paper report around 65% pixel accuracy, so there is a lot of room to improve.
Hi, has anyone experimented with running semantic segmentation inference on inputs of various sizes and getting a correctly sized mask?
I trained a DeepLabV3 model with a fixed size of (75, 1075),
but I want to use it to predict on a lot of inputs of different sizes,
for example: (250, 4500), (350, 4500), …
The prediction results always have size (75, 1075).
I know that in fastai v1 we could do learn.data.single_ds.tfmargs['size'] = None
Ref: Segmentation mask prediction on different input image sizes
But I don't know a good way to deal with this in fastai v2.
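The best workaround I have found so far is to do it in plain PyTorch after prediction (a sketch, not a fastai API): upsample the model's raw logits back to each input's original size, then take the argmax, so the mask matches the input resolution. `pred` here is just a stand-in for the model output at the fixed training size.

```python
import torch
import torch.nn.functional as F

# Stand-in for the model's raw output logits at the fixed training size.
n_classes = 6
pred = torch.randn(n_classes, 75, 1075)          # (C, H, W) logits

# Upsample the logits back to the real input's size BEFORE the argmax,
# so the final class mask matches the input resolution.
orig_size = (250, 4500)
up = F.interpolate(pred.unsqueeze(0), size=orig_size,
                   mode='bilinear', align_corners=False)
mask = up.squeeze(0).argmax(dim=0)               # (250, 4500) class mask
```

Interpolating the logits (rather than the finished integer mask) keeps class boundaries smoother, since nearest-neighbor resizing of a label map tends to produce blocky edges.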
Thanks
I’m trying to get my hands on a 3090 myself. If I get that, there’s no waiting
But yes, what you said makes sense, though I think it'll be a while before (and if) they get around to doing that.
I tried using only x[0], but every time I run lr_find I get a different result, and when I train the model I always get wrong results. Have you ever considered adding DANet to SemTorch? Any suggestions? Should I change the loss function? Thanks in advance for any answers!
Sorry for the late reply @bowenroom. I have had a hard week.
I have considered adding new architectures. I can help you get this one added.
Can I close the issue and keep the discussion going here? I need to take a look at this architecture first!
My package makes use of other archs that return a tuple or list too.
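Conceptually, the way such architectures are handled looks like this wrapper (a simplified sketch, not SemTorch's actual code): the loss only scores the first element of the model output, which is typically the main prediction head, while auxiliary heads are ignored.

```python
import torch
import torch.nn as nn

class FirstOutputLoss(nn.Module):
    """Sketch: make a standard loss tolerate models that return a
    tuple/list of outputs (main head plus auxiliary heads)."""
    def __init__(self, base_loss):
        super().__init__()
        self.base_loss = base_loss

    def forward(self, preds, target):
        # If the model returned a tuple/list, score only the main output.
        if isinstance(preds, (tuple, list)):
            preds = preds[0]
        return self.base_loss(preds, target)

loss_fn = FirstOutputLoss(nn.CrossEntropyLoss())
# Fake a tuple output: (main head, auxiliary head), 5 classes, 8x8 masks.
preds = (torch.randn(2, 5, 8, 8), torch.randn(2, 5, 8, 8))
target = torch.randint(0, 5, (2, 8, 8))
loss = loss_fn(preds, target)
```

A variant of the same idea is to also add the auxiliary outputs to the loss with a small weight, which is how some papers train these multi-head architectures; the wrapper above just drops them for simplicity.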
I am curious whether you labeled the data yourself. I am currently looking at different tools for manual labeling (ideally with the model in the loop), and for semantic segmentation it is a mess: they are super slow or they generate messy JSONs.
I tried:
- Label Studio: it is utterly slow and the interface glitches a lot.
- CVAT: currently using it, but I would really like a non-brush tool, to paint the pixels instead of the box.
Do you recommend a particular encoding for this task? COCO, etc…?
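For what it's worth, COCO stores segmentation masks with run-length encoding (RLE). A minimal sketch of the idea in NumPy (the real COCO RLE in pycocotools is column-major and differs in details, so treat this only as an illustration):

```python
import numpy as np

def rle_encode(mask):
    """Toy run-length encoding of a binary mask.

    Returns the first pixel's value and the list of run lengths,
    flattening row-major. Illustrative only; COCO's actual RLE
    (pycocotools) flattens column-major and packs differently.
    """
    flat = mask.flatten()
    # Indices where the value changes, shifted to mark run starts.
    changes = np.where(flat[1:] != flat[:-1])[0] + 1
    runs = np.diff(np.concatenate([[0], changes, [flat.size]]))
    return int(flat[0]), runs.tolist()

mask = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0]], dtype=np.uint8)
start, runs = rle_encode(mask)  # runs always sum to the pixel count
```

RLE is compact for the large uniform regions typical of segmentation masks, which is presumably why COCO chose it over storing raw label arrays.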