Data normalization fails with tfms

Hello! I have some 33x33 images with 2 channels and I want to build a binary classifier. Everything works fine until I add data augmentation (which I really want, since I don't have that many images to begin with). Without tfms I have this:

data = ImageClassifierData.from_arrays(PATH, (X_train, Y_train), (X_val, Y_val), bs=256)
x, y = next(iter(data.trn_dl))
x.size()

which gives me:

torch.Size([256, 2, 33, 33])

When I add tfms:

sz = 33  # image size, per the 33x33 inputs above
stats = (np.array([0.000918273, 0.01935371]), np.array([0.01669360, 0.1687338]))
tfms = tfms_from_stats(stats, sz, aug_tfms=[RandomFlip()], pad=sz//8)
data = ImageClassifierData.from_arrays(PATH, (X_train, Y_train), (X_val, Y_val), tfms, bs=256)
x, y = next(iter(data.trn_dl))

I get this error:

operands could not be broadcast together with shapes (33,33,33) (2,)

which seems to come from fastai/transforms.py, in this part of the code:

def __call__(self, x, y=None):
    x = (x - self.m) / self.s
    if self.tfm_y == TfmType.PIXEL and y is not None:
        y = (y - self.m) / self.s
    return x, y
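If it helps, here is a plain-numpy sketch of the broadcasting rule I think is involved (my own illustration, not from fastai): numpy aligns shapes from the trailing axis, so per-channel stats of shape (2,) only broadcast against a channels-last array, not a channels-first one.

```python
import numpy as np

m = np.array([0.000918273, 0.01935371])  # per-channel means, shape (2,)

# channels-last (H, W, C): trailing axis is 2, matches m, broadcasts fine
hwc = np.zeros((33, 33, 2))
print((hwc - m).shape)  # (33, 33, 2)

# channels-first (C, H, W): trailing axis is 33, does not match m
chw = np.zeros((2, 33, 33))
try:
    chw - m
except ValueError as e:
    print(e)  # operands could not be broadcast together ...
```

So the subtraction in `(x - self.m) / self.s` seems to assume the image arrives channels-last at that point in the pipeline.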

The same error also appears when I run learn.fit(). Can someone tell me what is going on? Why does the shape of my tensor change from (2,33,33) to (33,33,33)? Thank you!