Pre-sizing and DataBlock for segmentation (and more!)

I wanted to use pre-sizing for a segmentation task, so I used the DataBlock API with ImageBlock and MaskBlock. In item_tfms I used Resize(360), and then in batch_tfms I used aug_transforms(size=224, min_scale=0.75). When I checked the output of the dls, I found that only my images were resized to 224; my masks were still 360.
1- Is there any way I can do the same pre-sizing, with exactly the same randomness, applied to my masks with the DataBlock API?
2- Is it possible to use two different TfmdLists (one for images and one for masks, then merge them into a dls) while keeping the augmentation randomness the same for images and masks in a segmentation task?
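The key requirement behind both questions is that whatever random parameters the augmentation picks must be applied identically to the image and its mask. Here is a toy, stdlib-only sketch of that idea (plain Python lists instead of tensors, and no fastai; it just shows that sampling the crop coordinates once and reusing them keeps the pair aligned):

```python
import random

def random_crop_params(h, w, size, rng):
    # Sample ONE crop location. Reusing these coordinates for both the
    # image and the mask is what keeps them pixel-aligned.
    top = rng.randint(0, h - size)
    left = rng.randint(0, w - size)
    return top, left

def crop(grid, top, left, size):
    return [row[left:left + size] for row in grid[top:top + size]]

# Toy 6x6 "image" and a mask labelling the same pixels.
img  = [[(r, c) for c in range(6)] for r in range(6)]
mask = [[(r, c) for c in range(6)] for r in range(6)]

rng = random.Random(42)
top, left = random_crop_params(6, 6, size=4, rng=rng)
img_crop  = crop(img,  top, left, 4)
mask_crop = crop(mask, top, left, 4)

# Every cropped mask pixel still matches its image pixel.
assert img_crop == mask_crop
```

In fastai this pairing is exactly what typed batch transforms do when they dispatch on both TensorImage and TensorMask; the question is which transforms implement the mask case.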

What’s the output of dls.summary?

Here it is :slight_smile:

That’s because you’re cropping. You shouldn’t crop in a segmentation problem; that’s why it won’t work. You should use Resize instead. Pre-sizing isn’t quite the same with segmentation. We’re playing around with this right now because our labels are half the size of our training data, and doing 960 -> 480 -> 960 (via three Resize transforms) shows a large improvement.
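One reason a chain of Resize steps is safe for masks is that masks are resized with nearest-neighbour interpolation, which never blends label values. A toy stdlib-only illustration (plain nested lists, no fastai; in real code fastai's Resize handles this for TensorMask):

```python
def resize_nearest(grid, new_h, new_w):
    # Nearest-neighbour resize: each output pixel copies one input pixel,
    # so label values are never interpolated into invalid in-between values.
    h, w = len(grid), len(grid[0])
    return [[grid[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

mask = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 0, 0],
        [1, 1, 0, 0]]

down = resize_nearest(mask, 2, 2)   # e.g. 960 -> 480
up   = resize_nearest(down, 4, 4)   # e.g. 480 -> 960

# Labels stay in {0, 1} throughout; for this block-structured mask the
# round trip is even lossless.
assert all(v in (0, 1) for row in up for v in row)
assert up == mask
```

By contrast, a random crop discards pixels entirely, and in segmentation the labels you crop away are training signal you can't get back.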

But again, the pre-sizing technique does not apply to segmentation: you’d be cropping out valuable data.


So you mean we cannot use pre-sizing for segmentation at all? I assume the second resize, without cropping, would be meaningless when there is another resize one step before it.

No, not natively. It solves a different goal. In regular classification we look at the pixels as a whole (to a certain extent); in segmentation it’s pixel-wise.

You could certainly @patch something to RandomResizedCropGPU to get it working and report back
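For anyone unfamiliar with @patch: fastai's fastcore @patch decorator attaches a new method to an existing class by reading the type annotation on self, which is how you'd bolt mask handling onto RandomResizedCropGPU. A minimal stdlib-only re-implementation of the mechanism (FakeCropGPU is a hypothetical stand-in; a real patch would use fastcore's @patch on RandomResizedCropGPU and reuse the image crop coordinates with nearest-neighbour interpolation for masks):

```python
def patch(fn):
    # Mimics fastcore's @patch: attach fn to the class named in the
    # `self:` annotation.
    cls = fn.__annotations__["self"]
    setattr(cls, fn.__name__, fn)
    return fn

class FakeCropGPU:
    """Hypothetical stand-in for RandomResizedCropGPU."""
    def encodes(self, x):
        return x

@patch
def encodes(self: FakeCropGPU, x):
    # Illustrative override only: tag the output so the patch is visible.
    # A real mask-aware encodes would crop with the transform's stored
    # random coordinates.
    return ("mask-aware", x)

t = FakeCropGPU()
assert t.encodes([1, 2, 3]) == ("mask-aware", [1, 2, 3])
```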

Aha, I got it. You’re right; it’s quite a different story now, I think.
Thanks for your help!

I will give it a try! :slight_smile: