Inverting the image preprocessing pipeline on a predicted segmentation mask

I have a model that does the following (rough sketch after the list):

  1. Takes an input image
  2. Applies the preprocessing transform pipeline (resizing, normalization, etc.)
  3. Performs inference and returns the resized input image (via with_input=True) along with the predicted mask at the resized resolution
  4. Uses the predicted mask to compute color values sampled from the resized input image
  5. Uses the predicted mask to make distance measurements in pixel space between points derived from the mask regions
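
For concreteness, here is roughly what that pipeline looks like. This is a simplified sketch, not my exact code; `learn` is a trained fastai segmentation `Learner` and `test_dl` is a `DataLoader` built with the same preprocessing transforms (both assumed):

```python
from fastai.vision.all import *

# with_input=True makes get_preds return the (transformed) inputs as well
inputs, preds = learn.get_preds(dl=test_dl, with_input=True)[:2]
masks = preds.argmax(dim=1)      # (n, H, W) masks at the *resized* resolution

resized_img = inputs[0]          # (3, H, W) normalized, resized input tensor
mask = masks[0]                  # predicted mask for that resized image

# Step 4: sample colors from the resized image inside a mask region
region_pixels = resized_img[:, mask == 1]   # (3, n_pixels)
mean_color = region_pixels.mean(dim=1)

# Step 5: distances are currently measured in resized-pixel space,
# while the manual Photoshop measurements are in original-pixel space.
```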

Now we are validating these model-generated measurements against manually collected ones. The manual measurements were made in Photoshop on the original (full-resolution) input image.

My question is: does anyone know how I can obtain the inverse function between the resized image and the original image, so that I can transform the predicted mask back to the original size? Is there an existing facility in fastai or PyTorch for this inverse transform step? Can someone point me to it?
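
For reference, if the only spatial transform were a plain resize (no crop or pad), the manual fallback I have in mind is a nearest-neighbor resize of the mask back to the original dimensions, something like the sketch below (`mask_to_original` is just a hypothetical helper, not an existing fastai function):

```python
import torch
import torch.nn.functional as F

def mask_to_original(mask: torch.Tensor, orig_h: int, orig_w: int) -> torch.Tensor:
    """Resize a (H, W) integer mask back to (orig_h, orig_w)."""
    # interpolate expects a (N, C, H, W) float tensor; nearest-neighbor
    # avoids blending label values at region boundaries
    m = mask.float()[None, None]                      # (1, 1, H, W)
    m = F.interpolate(m, size=(orig_h, orig_w), mode="nearest")
    return m[0, 0].long()                             # (orig_h, orig_w)
```

But if the pipeline crops or pads to preserve aspect ratio, a plain resize won't be an exact inverse, which is why I'm hoping fastai already tracks enough of the transform state to decode it properly. Relatedly, I assume any pixel distances measured at the resized resolution would need rescaling by the width/height ratios before comparing against the Photoshop numbers.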

Thanks!