Incorporating ImgAug with FastAI 0.7

Hello, I am trying to incorporate ImgAug into the FastAI pipeline.

I have encountered two issues:

  • Training is much slower after augmentation
    Compared to the native FastAI transforms, the ImgAug pipeline makes training 5-6 times slower.
    I am wondering whether this is an inevitable consequence of applying so many fancy transformations, or whether I am doing something wrong.
  • Augmented images are no longer properly normalized
    I compared batches returned with and without the ImgAug tfms (see the comparison snippet below).
    Max, min, mean, and std all change drastically, and I would like to know what I am doing wrong in my code.

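Roughly how I compared the batch statistics (just a sketch; PATH, arch, sz and bs are placeholders for my actual setup, and my_trans is the wrapper defined further down):

import numpy as np
from fastai.transforms import tfms_from_model
from fastai.dataset import ImageClassifierData

def batch_stats(data):
    x, y = next(iter(data.trn_dl))  # grab one training batch
    x = x.cpu().numpy() if hasattr(x, 'cpu') else np.asarray(x)
    return x.min(), x.max(), x.mean(), x.std()

tfms_plain = tfms_from_model(arch, sz)                          # no extra augmentation
tfms_aug   = tfms_from_model(arch, sz, aug_tfms=[my_trans()])   # with the ImgAug tfm

for name, t in [('plain', tfms_plain), ('imgaug', tfms_aug)]:
    data = ImageClassifierData.from_paths(PATH, tfms=t, bs=bs)
    print(name, batch_stats(data))
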
Many thanks for the help!

My ImgAug pipeline (copied straight from the imgaug docs)

import imgaug as ia
from imgaug import augmenters as iaa

sometimes = lambda aug: iaa.Sometimes(0.5, aug)
seq = iaa.Sequential(
    [   iaa.Fliplr(0.5), 
        iaa.Flipud(0.2), 
        sometimes(iaa.Affine(
            scale={"x": (0.9, 1.1), "y": (0.9, 1.1)}, 
            translate_percent={"x": (-0.1, 0.1), "y": (-0.1, 0.1)}, 
            rotate=(-10, 10), 
            shear=(-5, 5), 
            order=[0, 1], 
            cval=(0, 255), 
            mode=ia.ALL 
        )),
        iaa.SomeOf((0, 5),
            [
                iaa.OneOf([
                    iaa.GaussianBlur((0, 1.0)), 
                    iaa.AverageBlur(k=(3, 5)), 
                ]),
                iaa.Sharpen(alpha=(0, 0.5), lightness=(0.9, 1.1)), 
                iaa.Emboss(alpha=(0, 0.5), strength=(0.8, 1.2)), 
                iaa.SimplexNoiseAlpha(iaa.OneOf([
                    iaa.EdgeDetect(alpha=(0, 0.5)),
                    iaa.DirectedEdgeDetect(alpha=(0., 0.5), direction=(0.0, 1.0)),
                ])),
                iaa.OneOf([
                    iaa.Dropout((0.01, 0.05), per_channel=0.5), 
                    iaa.CoarseDropout((0.01, 0.03), size_percent=(0.01, 0.02), per_channel=0.2),
                ]),
                iaa.Invert(0.1, per_channel=True), 
                iaa.Add((0,1), per_channel=0.5), 
                iaa.AddToHueAndSaturation((0, 1)), 
                iaa.OneOf([
                    iaa.Multiply((0.9, 1.1), per_channel=0.5),
                    iaa.FrequencyNoiseAlpha(
                        exponent=(-1, 0),
                        first=iaa.Multiply((0.9, 1.1), per_channel=True),
                        second=iaa.ContrastNormalization((0.9, 1.1))
                    )
                ]),
                sometimes(iaa.ElasticTransformation(alpha=(0.5, 3.5), sigma=0.25)), # move pixels locally around (with random strengths)
                sometimes(iaa.PiecewiseAffine(scale=(0.01, 0.05))), # sometimes move parts of the image around
            ],
            random_order=True
        )
    ],
    random_order=True
)
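A quick standalone sanity check of the pipeline outside FastAI (again a sketch; the random uint8 image and sz are placeholders):

import numpy as np

img = np.random.randint(0, 256, size=(sz, sz, 3), dtype=np.uint8)  # HWC image in 0-255
img_aug = seq.augment_image(img)
print(img.dtype, img_aug.dtype, img_aug.min(), img_aug.max())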

My FastAI transform wrapper

from fastai.transforms import Transform, TfmType, tfms_from_model

class my_trans(Transform):
    def __init__(self, tfm_y=TfmType.NO):
        super().__init__(tfm_y)      # let the base class store tfm_y and its internal state
    def set_state(self):
        pass                         # no per-call state needed; imgaug handles its own randomness
    def do_transform(self, x, is_y):
        if is_y:
            return x                 # leave labels/masks untouched
        return seq.augment_image(x)  # apply the imgaug pipeline to the input image only

aug_tfms = [my_trans()]
tfms = tfms_from_model(arch, sz, aug_tfms=aug_tfms)
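For completeness, this is roughly how the transforms feed into training (PATH, arch, sz and bs are placeholders for my actual values):

from fastai.dataset import ImageClassifierData
from fastai.conv_learner import ConvLearner

data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)
learn = ConvLearner.pretrained(arch, data)
learn.fit(1e-2, 3)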