U-Net Data Loader

How can input images and output mask images for a U-Net be transformed differently with the fast.ai data loader?

I'm currently implementing a U-Net for semantic segmentation with fast.ai / PyTorch v1. The dataset contains input images and mask images of the same size.

However, the network model has an input size of 512x512 and a mask output size of 450x450.

"Transforms from model" (`tfms_from_model`) transforms both the input and the target images to the same size. Which transform can I use to resize the target masks to a different size than the inputs?

```python
# data loader from the carvana example
tfms = tfms_from_model(resnet34, sz, crop_type=CropType.NO,
                       tfm_y=TfmType.CLASS, aug_tfms=aug_tfms)
datasets = ImageData.get_ds(MatchedFilesDataset, (trn_x, trn_y),
                            (val_x, val_y), tfms, path=PATH)
md = ImageData(PATH, datasets, bs, num_workers=16, classes=None)
denorm = md.trn_ds.denorm
```
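Since `tfm_y=TfmType.CLASS` applies the same geometric transforms to both x and y, one workaround (a sketch, not a fastai API) is to let the transforms resize both to 512x512 and then crop only the mask down to 450x450 yourself. The 512 → 450 shrinkage matches what unpadded ("valid") convolutions do in the original U-Net design, so a center crop keeps the mask aligned with the network output. `center_crop` below is my own helper, not a fastai function:

```python
import numpy as np

def center_crop(mask: np.ndarray, out_size: int) -> np.ndarray:
    """Center-crop a (H, W) mask to (out_size, out_size).

    A U-Net with unpadded ("valid") convolutions predicts a smaller
    map than its input, so the target mask must be cropped to match.
    """
    h, w = mask.shape[:2]
    top = (h - out_size) // 2
    left = (w - out_size) // 2
    return mask[top:top + out_size, left:left + out_size]

# After tfms have resized both x and y to 512x512, crop only the
# mask down to the 450x450 map the network actually outputs.
mask_512 = np.zeros((512, 512), dtype=np.uint8)
mask_450 = center_crop(mask_512, 450)
print(mask_450.shape)  # -> (450, 450)
```

One place to hook this in would be a subclass of `MatchedFilesDataset` that calls `center_crop` on the mask in its `get_y` (or after the parent's transform step); that keeps the input pipeline untouched while only the target changes size.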