[fastbook] Unexplained Randomness in Data Augmentation

I am reviewing 05_pet_breeds.ipynb from the fastai book and trying out some of the code snippets. While working through the code in the Presizing section, I ran into randomness in the data augmentation that I can't explain.
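For context, the presizing recipe in that section is, roughly from memory, an item-level Resize(460) followed by aug_transforms(size=224, min_scale=0.75) as batch transforms (the exact getters below may differ slightly from the notebook):

from fastai2.vision.all import *

path = untar_data(URLs.PETS)
pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                 item_tfms=Resize(460),                                # presize each item on the CPU
                 batch_tfms=aug_transforms(size=224, min_scale=0.75))  # augment + downsize per batch on the GPU
dls = pets.dataloaders(path/'images')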

Reproducing the Issue

from fastai2.vision.all import *
path = untar_data(URLs.PETS)

dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
                   get_y=parent_label,
                   item_tfms=Resize(460))
dls1 = dblock1.dataloaders([(path/'images'/'Ragdoll_202.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones

SIZE = 224

x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2, figsize = (16, 8))

# Right-hand panel (axs[1]): apply the affine transforms manually, step by step
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz = SIZE)
x1 = x1.rotate(draw = 30, p = 1.)
x1 = x1.zoom(draw = 1.2, p = 1.)
x1 = x1.warp(draw_x = -0.2, draw_y = 0.2, p = 1.)

# Left-hand panel (axs[0]): the same transforms composed as batch transforms
tfms = setup_aug_tfms([Rotate(draw = 30, p = 1, size = SIZE), 
                       Zoom(draw = 1.2, p = 1., size = SIZE),
                       Warp(draw_x=-0.2, draw_y=0.2, p=1., size = SIZE)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);

Here is what I got when executing the code:

There are 2 things I don’t understand:

  1. The transformation on the left-hand side (from the batch transforms) should align with the one on the right-hand side (from the manual transforms), but it does not.
  2. When I execute the code several times, I get different results. The randomness looks strange to me because (i) the augmentations are all deterministic (rotation, zooming and warping all have p = 1 and fixed draw values), and (ii) the sample comes from the validation set, so Resize(460) should deterministically crop at the center rather than at a random location (a quick check of this is sketched after the question below).

Could anyone give me some clues on this?
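For what it's worth, point (ii) can be checked on its own. A minimal check along these lines (not from the notebook; it reuses dls1 from the code above):

# The validation pipeline (Resize(460) centre crop, no shuffling) should hand
# back an identical batch on every call, so any randomness must enter later.
xa, _ = dls1.valid.one_batch()
xb, _ = dls1.valid.one_batch()
print(bool((xa == xb).all()))  # expected: True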

I finally found the answer: the randomness comes from the zoom operation. Even with draw = 1.2 pinning the zoom scale, zoom_mat still samples the zoom centre at random when draw_x/draw_y are not given (see the appendix below).

When zoom is disabled on both sides, the results are deterministic and roughly the same (I don't know what causes the remaining minor difference and am still looking for an answer to that). Here is what I got after disabling zooming on both sides:

from fastai2.vision.all import *
path = untar_data(URLs.PETS)

dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
                   get_y=parent_label,
                   item_tfms=Resize(460))
dls1 = dblock1.dataloaders([(path/'images'/'Ragdoll_202.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones

SIZE = 224

x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2, figsize = (16, 8))

x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz = SIZE)
x1 = x1.rotate(draw = 30, p = 1.)
#x1 = x1.zoom(draw = 1.2, p = 1.)
x1 = x1.warp(draw_x = -0.2, draw_y = 0.2, p = 1.)

tfms = setup_aug_tfms([Rotate(draw = 30, p = 1, size = SIZE), 
                       #Zoom(draw = 1.2, p = 1., size = SIZE),
                       Warp(draw_x=-0.2, draw_y=0.2, p=1., size = SIZE)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
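An alternative to dropping the zoom entirely: judging from the zoom_mat and Zoom signatures quoted in the appendix below, both the zoom method and the Zoom batch transform accept draw_x/draw_y for the zoom centre, so pinning the centre should make the zoom deterministic as well. A sketch of that variant (untested; the minor difference between the two panels may still remain):

# Start from a fresh validation batch, then pin the zoom scale (draw) and the
# zoom centre (draw_x/draw_y) so that zoom_mat has nothing left to sample.
x, y = dls1.valid.one_batch()

x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz = SIZE)
x1 = x1.rotate(draw = 30, p = 1.)
x1 = x1.zoom(draw = 1.2, draw_x = 0.5, draw_y = 0.5, p = 1.)
x1 = x1.warp(draw_x = -0.2, draw_y = 0.2, p = 1.)

tfms = setup_aug_tfms([Rotate(draw = 30, p = 1., size = SIZE),
                       Zoom(draw = 1.2, draw_x = 0.5, draw_y = 0.5, p = 1., size = SIZE),
                       Warp(draw_x = -0.2, draw_y = 0.2, p = 1., size = SIZE)])
x = Pipeline(tfms)(x)

TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);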

Appendix on Related Source Code

@delegates(zoom_mat)
@patch
def zoom(x: (TensorImage,TensorMask,TensorPoint,TensorBBox), size=None, mode='bilinear', pad_mode=PadMode.Reflection,
         align_corners=True, **kwargs):
    x0,mode,pad_mode = _get_default(x, mode, pad_mode)
    return x.affine_coord(mat=zoom_mat(x0, **kwargs)[:,:2], sz=size, mode=mode, pad_mode=pad_mode, align_corners=align_corners)

# Cell
def Zoom(max_zoom=1.1, p=0.5, draw=None, draw_x=None, draw_y=None, size=None, mode='bilinear',
         pad_mode=PadMode.Reflection, batch=False, align_corners=True):
    "Apply a random zoom of at most `max_zoom` with probability `p` to a batch of images"
    return AffineCoordTfm(partial(zoom_mat, max_zoom=max_zoom, p=p, draw=draw, draw_x=draw_x, draw_y=draw_y, batch=batch),
                          size=size, mode=mode, pad_mode=pad_mode, align_corners=align_corners)

As you can see, both the batch transform Zoom and the TensorImage method zoom (added via the @patch mechanism) make use of the function zoom_mat. That function essentially outputs a random affine matrix for the zooming operation:

def zoom_mat(x, max_zoom=1.1, p=0.5, draw=None, draw_x=None, draw_y=None, batch=False):
    "Return a random zoom matrix with `max_zoom` and `p`"
    def _def_draw(x):       return x.new(x.size(0)).uniform_(1, max_zoom)
    def _def_draw_b(x):     return x.new_zeros(x.size(0)) + random.uniform(1, max_zoom)
    def _def_draw_ctr(x):   return x.new(x.size(0)).uniform_(0,1)
    def _def_draw_ctr_b(x): return x.new_zeros(x.size(0)) + random.uniform(0,1)
    s = 1/_draw_mask(x, _def_draw_b if batch else _def_draw, draw=draw, p=p, neutral=1., batch=batch)
    def_draw_c = _def_draw_ctr_b if batch else _def_draw_ctr
    col_pct = _draw_mask(x, def_draw_c, draw=draw_x, p=1., batch=batch)
    row_pct = _draw_mask(x, def_draw_c, draw=draw_y, p=1., batch=batch)
    col_c = (1-s) * (2*col_pct - 1)
    row_c = (1-s) * (2*row_pct - 1)
    return affine_mat(s,     t0(s), col_c,
                      t0(s), s,     row_c)
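
To make the source of the randomness concrete: even when draw pins the scale s, col_pct and row_pct above still come from uniform_(0, 1) whenever draw_x/draw_y are left as None. A small sketch of that (assuming zoom_mat is importable from fastai2.vision.all, and using an arbitrary batch of blank images):

from fastai2.vision.all import *

# Two calls with the scale pinned to 1.2: the translation terms col_c/row_c
# still differ from call to call because the zoom centre is drawn at random.
x = TensorImage(torch.zeros(4, 3, 460, 460))
m1 = zoom_mat(x, draw=1.2, p=1.)
m2 = zoom_mat(x, draw=1.2, p=1.)
print(m1[:, :2, 2])            # col_c / row_c for each image in the batch
print(m2[:, :2, 2])            # different values on each run
print(torch.allclose(m1, m2))  # expected: False

# Pinning the centre as well removes the last source of randomness.
m3 = zoom_mat(x, draw=1.2, draw_x=0.5, draw_y=0.5, p=1.)
m4 = zoom_mat(x, draw=1.2, draw_x=0.5, draw_y=0.5, p=1.)
print(torch.allclose(m3, m4))  # expected: True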