[vision] Can I do a batch_tfm AFTER it has been converted into a torch.tensor?

Hello,

I want to apply a custom transform for computer vision.

My transform runs on the GPU on a batch, and I want it to be the LAST transform that gets applied. In fact, I want it applied even after the TensorImage has been converted into a plain tensor (with shape (64, 3, 128, 128), say, if images are 128x128 and my batch size is 64). Right now I see a TensorImage with shape (64, 128, 128, 3) and dtype byte instead of what I want, which is why I can't apply my tfm yet.

Can I do this?

I can work around this by wrapping my model into another model (and making the tfm a part of the model), but then I don’t get a decode in order to use show_batch correctly :frowning:
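For reference, the wrapping workaround I mean looks roughly like this (CustomTfmModule is just a placeholder name for my transform written as an nn.Module):

import torch.nn as nn

class WrappedModel(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.tfm = CustomTfmModule()  # hypothetical module that applies my transform
        self.model = model

    def forward(self, x):
        # apply the transform to the batch first, then run the real model
        return self.model(self.tfm(x))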

EDIT: Also, it seems my images are getting normalized without my permission. Why, though? That is, dls.one_batch() returns a batch with values ranging from roughly -2.6 to 2.6. But I NEVER asked it to normalize. More importantly, how can I stop this?

You should be able to do this by subclassing RandTransform and defining def encodes(self, x:torch.Tensor): return custom_transform(x)
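
Here is a minimal sketch, assuming your GPU transform is a function custom_transform that takes and returns a float tensor of shape (bs, 3, H, W); the high order value is there so it runs after the other batch_tfms:

import fastai.vision.all as fv

class MyGPUTfm(fv.RandTransform):
    order = 100  # high order so this runs after the built-in batch transforms

    def encodes(self, x: fv.TensorImage):
        # TensorImage subclasses torch.Tensor, so regular tensor ops work here
        return custom_transform(x)

    # optionally define decodes(self, x) to undo the transform so show_batch displays correctly

Then pass it along with the other batch transforms, e.g. batch_tfms=fv.aug_transforms() + [MyGPUTfm()].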

Normalization should not be happening at all by default. It will be easier to help out here if you share the code for how you're constructing your DataLoaders.

Thank you.

I load images like this:

import fastai.vision.all as fv

def load_data(path,size,bs):
    dblock = fv.DataBlock(blocks     = (fv.ImageBlock, fv.ImageBlock),
                          get_items  = fv.get_image_files,
                          get_y      = lambda x: x,
                          splitter   = fv.RandomSplitter(valid_pct=0.1),
                          item_tfms  = fv.Resize(size),
                          batch_tfms = fv.aug_transforms()
                         )
    return dblock.dataloaders(path,bs=bs)
data = load_data("images",128,64)
x,y = data.one_batch()

Then I print x.max() or y.max().
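
Concretely, the check looks like this (the values I see are the ones mentioned above):

print(x.min().item(), x.max().item())  # roughly -2.6 to 2.6, i.e. already normalized
print(y.min().item(), y.max().item())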

Wait, I found it!

The problem is that unet_learner has an option called "normalize" (on by default) that normalizes x and y, presumably because the pretrained backbone expects normalized input.
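
Passing normalize=False when building the learner turns that off; a minimal sketch (resnet34 is just an example backbone):

learn = fv.unet_learner(data, fv.resnet34, normalize=False)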

Thanks!
