Just wanted to share my code for adding advanced data augmentation to the fastai training pipeline. I really wonder if there is a more elegant way to do it?
Libraries like albumentations or imgaug expect a PIL-like image array (or an OpenCV image converted to RGB), while fastai augmentations work on tensors. So we have to convert the data back and forth, and there is also some wrapping required by fastai.
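A minimal sketch of that back-and-forth conversion, using numpy arrays in place of torch tensors and a hand-written flip standing in for a real albumentations pipeline (the helper names `tensor2np`/`np2tensor` are my own, not fastai's):

```python
import numpy as np

# Stand-in for an albumentations pipeline: albumentations expects an
# HWC RGB array (like PIL, or cv2 converted to RGB) and returns a dict.
def alb_tfm(image):
    return {"image": image[:, ::-1]}  # e.g. a horizontal flip

def tensor2np(t):
    """CHW float array in [0, 1] (fastai-style) -> HWC uint8 array."""
    return (np.transpose(t, (1, 2, 0)) * 255).astype(np.uint8)

def np2tensor(a):
    """HWC uint8 array -> CHW float array in [0, 1]."""
    return np.transpose(a.astype(np.float32) / 255, (2, 0, 1))

chw = np.random.rand(3, 4, 5).astype(np.float32)   # fake image tensor
out = np2tensor(alb_tfm(image=tensor2np(chw))["image"])
assert out.shape == chw.shape                      # layout is restored
```

With real torch tensors the idea is the same: `permute` the axes, scale to uint8, run the albumentations transform, then convert back.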
That’s neat! Last time I used albumentations with fastai, I created a DataBunch from my own custom Dataset class. I’ll play with this approach soon!
Thanks for sharing
@sayakgis The albumentations library does support this. In the code you will have to replace
alb_tfm(image=np_image)
with
alb_tfm(image=np_image, mask=???)
So the question is: how do you get the mask? I can see that fastai’s transform() method has a tfm_y argument. Maybe if you set it to True, the mask gets passed to your transform method as well, but I am not sure; I have never tried that.
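Albumentations does accept both keywords and applies the same spatial transform to image and mask. A minimal sketch of that behaviour, with a hand-written flip standing in for the real `albumentations.Compose` pipeline (how you actually obtain `np_mask` from fastai depends on the tfm_y question above):

```python
import numpy as np

# Stand-in for an albumentations pipeline called as
# alb_tfm(image=..., mask=...); the same flip is applied to both targets,
# which is what albumentations guarantees for spatial transforms.
def alb_tfm(image, mask):
    return {"image": image[:, ::-1], "mask": mask[:, ::-1]}

np_image = np.zeros((4, 5, 3), dtype=np.uint8)
np_mask = np.zeros((4, 5), dtype=np.uint8)
np_mask[:, 0] = 1                        # mark the left column

out = alb_tfm(image=np_image, mask=np_mask)
# The horizontal flip moved the marked column to the right edge.
assert out["mask"][:, -1].all()
```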
@serhiy
How about the normalization part here… I see you are only multiplying the image by 255. I presume tfms will send the normalized image to albu2tfms.