I have a model that does the following:
- Takes an input image
- Applies the preprocessing transform pipeline, including resizing, normalization, etc.
- Performs inference and returns the resized image (`with_input=True`) along with the predicted mask on the resized image
- Uses the predicted mask to perform color calculations, sampling colors from the resized input image
- Uses the predicted mask to perform distance measurements in pixel space, using points calculated from the mask regions
Now we are validating these model-generated measurements against manually collected ones, which we measured in Photoshop on the original input image.
My question is: how can I obtain the inverse of the resize transform so that I can map the predicted mask back to the original image size? Is there an existing facility in fastai or PyTorch to accomplish this inverse transform step? Can someone point me to it?
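To make the goal concrete, here is a minimal sketch of what I mean, assuming the only geometric transform in my pipeline is a plain resize (no cropping or padding) and that I know the original image dimensions. It just upsamples the mask back with nearest-neighbor interpolation so class ids are preserved; the function name is my own placeholder, not a fastai API:

```python
import torch
import torch.nn.functional as F

def mask_to_original_size(mask: torch.Tensor, orig_h: int, orig_w: int) -> torch.Tensor:
    """Resize a predicted (H, W) integer-class mask back to the original
    image resolution. Assumes the forward transform was a plain resize."""
    m = mask.float().unsqueeze(0).unsqueeze(0)               # (1, 1, H, W)
    m = F.interpolate(m, size=(orig_h, orig_w), mode="nearest")  # no label blending
    return m.squeeze(0).squeeze(0).long()

# Example: a 4x4 predicted mask mapped back to an 8x8 original image
mask = torch.tensor([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [2, 2, 0, 0],
                     [0, 0, 0, 0]])
big = mask_to_original_size(mask, 8, 8)
```

This handles the mask itself, but I'd also have to rescale my pixel-space distance measurements by the resize factors, which is why I'm hoping there is a built-in inverse/decode facility rather than hand-rolling it.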