Creating a unet_learner from a DataLoaders object transforms the training data, but not the validation data!

I have created the following DataLoaders object based on a custom PyTorch Dataset class called QSM_2D_With_Seg:

import fastai.data.core

train_ds = QSM_2D_With_Seg(train_samples)
valid_ds = QSM_2D_With_Seg(valid_samples)
dls = fastai.data.core.DataLoaders.from_dsets(train_ds, valid_ds, batch_size=8, device='cuda:0')
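For reference, QSM_2D_With_Seg is a standard map-style Dataset that returns (image, mask) tensor pairs; a simplified sketch (the loading details here are placeholders, not my exact code):

import torch
from torch.utils.data import Dataset

class QSM_2D_With_Seg(Dataset):
    # Returns (image, mask) pairs; images are min-max scaled to [0, 1].
    def __init__(self, samples):
        self.samples = samples  # e.g. a list of (image_array, mask_array) pairs

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img, mask = self.samples[idx]
        img = torch.as_tensor(img, dtype=torch.float32)
        img = (img - img.min()) / (img.max() - img.min())  # the 0-to-1 scaling mentioned below
        return img, torch.as_tensor(mask, dtype=torch.long)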

Viewing a batch of the training data, we can see that its range is between 0 and 1, with the mean around 0.5. I have performed this scaling myself, within the QSM_2D_With_Seg class:

batch = dls.train.one_batch()
x = batch[0][0][0].cpu()  # first channel of the first image in the batch
y = batch[1][0].cpu()     # corresponding segmentation mask
show_histogram(x, title="Input - After creating dataset", mask=y)
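(show_histogram is just a small matplotlib helper of mine, roughly equivalent to this:)

import matplotlib.pyplot as plt

def show_histogram(x, title="", mask=None):
    # Histogram of tensor values, optionally restricted to the masked region.
    vals = x[mask.bool()] if mask is not None else x
    plt.hist(vals.flatten().numpy(), bins=100)
    plt.title(title)
    plt.show()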

However, after creating a U-Net learner based on ResNet, the range changes! Perhaps this is expected, because a pretrained ResNet expects a particular normalization, and I pass normalize=True when creating the learner. Note that the range changes from [0, 1] to approximately [-2, 1.5]:

import fastai.losses
import fastai.vision.learner
import fastai.vision.models

learn = fastai.vision.learner.unet_learner(
    dls=dls,
    arch=fastai.vision.models.resnet34,
    n_out=2,
    loss_func=fastai.losses.CrossEntropyLossFlat(axis=1),
    normalize=True
)
batch = dls.train.one_batch()
x = batch[0][0][0].cpu() 
y = batch[1][0].cpu()
show_histogram(x, title="Input - After creating learner", mask=y)
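To see where this comes from, I printed the training loader's batch-transform pipeline; with normalize=True and a pretrained architecture, I believe unet_learner appends a Normalize transform (using the ImageNet statistics) to after_batch:

# The train loader's batch transforms; after creating the learner with
# normalize=True, a Normalize transform should be listed here.
print(dls.train.after_batch)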

But the validation data has not changed! I believe this is causing predictions to fail:

batch = dls.valid.one_batch()
x = batch[0][0][0].cpu() 
y = batch[1][0].cpu()
show_histogram(x, title="Input - after training (from validation set)", mask=y)

Note that the range remains between 0 and 1, unlike the training data, which has been normalized for ResNet!
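Printing the validation loader's pipeline in the same way shows whether the Normalize transform was ever attached to it:

# The validation loader's batch transforms; in my case Normalize does not
# appear here, matching the unnormalized histogram above.
print(dls.valid.after_batch)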

Why would the range of the training data be normalized, but not the validation data? How can I apply the same transformations to the validation data?
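One workaround I am considering is to attach the normalization myself when building the DataLoaders, so that every loader applies it, and then pass normalize=False to unet_learner so it does not add a second copy. A sketch, assuming the inputs have three channels so that the ImageNet statistics broadcast correctly:

from fastai.vision.all import Normalize, imagenet_stats

# Attach Normalize as an after_batch transform at construction time,
# so both the training and validation loaders apply it:
norm = Normalize.from_stats(*imagenet_stats)
dls = fastai.data.core.DataLoaders.from_dsets(
    train_ds, valid_ds,
    batch_size=8, device='cuda:0',
    after_batch=[norm],
)

Is this the intended way to do it, or is there a cleaner fix?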

My full notebook is available here.