Un-Normalize my Normalized Values?

I trained thousands of neural networks with varying hyperparameters to find the best ones.

My issue is this:
I need to access the values in my dataloader without the normalization that was applied. I’ve had a tough time hunting through `dir(MyDataloader)` to figure out whether the normalization statistics were stored anywhere.

Digging through the source, I found that the transform defines a decode operation, which is essentially the inverse of the normalization encoding. Is there a way to call this on my data?
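As a sketch of what calling decode could look like: in fastai v2, batch transforms live in a pipeline (`dls.after_batch`), and each transform’s decode undoes its encode. The snippet below mimics that pattern with a plain list and a stripped-down stand-in class (the real `Normalize` and pipeline objects come from fastai; everything here is a mock for illustration):

```python
import torch

# Stand-in for fastai's Normalize: same math, none of the Transform machinery.
class Normalize:
    def __init__(self, mean, std): self.mean, self.std = mean, std
    def encodes(self, x): return (x - self.mean) / self.std
    def decodes(self, x): return x * self.std + self.mean

# Stand-in for dls.after_batch: a pipeline holding the batch transforms.
pipeline = [Normalize(torch.tensor(0.5), torch.tensor(0.25))]

# Pull the Normalize instance out of the pipeline and invert a batch with it.
norm = next(t for t in pipeline if isinstance(t, Normalize))
x = torch.rand(4, 3, 8, 8)
assert torch.allclose(norm.decodes(norm.encodes(x)), x, atol=1e-6)
```

Against a real fastai `DataLoaders` the same lookup would run over `dls.after_batch`, and `dls.decode_batch(...)` should apply every transform’s decode for you — though I haven’t verified either against your fastai version, so treat both as things to try.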

Otherwise, I have dls.means and dls.stds. After looking at the Normalize transform, it appears I can un-normalize with

OriginalValue = (myNormalizedValue * CorrespondingSTD) + CorrespondingMean
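That formula does round-trip exactly. A minimal check in plain PyTorch, using made-up per-channel stats in place of `dls.means` / `dls.stds`:

```python
import torch

# Hypothetical per-channel stats standing in for dls.means / dls.stds,
# shaped to broadcast over a (batch, channel, height, width) tensor.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std  = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

x = torch.rand(2, 3, 4, 4)          # pretend this is an original batch
x_norm = (x - mean) / std           # what Normalize's encode does
x_back = x_norm * std + mean        # the proposed un-normalization

# The round trip recovers the original values (up to float rounding).
assert torch.allclose(x, x_back, atol=1e-6)
```

The only thing to be careful of is that your mean/std tensors broadcast against the batch the same way they did during normalization (fastai’s default axes are `(0,2,3)`, i.e. per-channel stats).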

Any thoughts?

Normalize function code is below:

```python
@docs
class Normalize(DisplayedTransform):
    "Normalize/denorm batch of `TensorImage`"
    parameters,order = L('mean', 'std'),99
    def __init__(self, mean=None, std=None, axes=(0,2,3)): store_attr()

    @classmethod
    def from_stats(cls, mean, std, dim=1, ndim=4, cuda=True): return cls(*broadcast_vec(dim, ndim, mean, std, cuda=cuda))

    def setups(self, dl:DataLoader):
        if self.mean is None or self.std is None:
            x,*_ = dl.one_batch()
            self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7

    def encodes(self, x:TensorImage): return (x-self.mean) / self.std
    def decodes(self, x:TensorImage):
        f = to_cpu if x.device.type=='cpu' else noop
        return (x*f(self.std) + f(self.mean))

    _docs=dict(encodes="Normalize batch", decodes="Denormalize batch")
```