Having trouble finding correct image resize transform

I am trying to overlay the segmentation mask produced by my unet_learner on the input image. The mask is created without problems by calling learn.predict and has the right proportions, but it is smaller than the input image.

from fastai.vision.all import PILImage, load_learner
import matplotlib.pyplot as plt
import numpy as np

img.img = PILImage.create(fp)
print(type(img.img))
print(img.img)
plt.imshow(np.array(img.img))

# <class 'fastai.vision.core.PILImage'>
# PILImage mode=RGB size=640x480
# <matplotlib.image.AxesImage at 0x7f82fc5bdef0>

[Screenshot: the 640x480 input image shown with plt.imshow]

profile.model.learner = load_learner(mod_fp)
print(type(profile.model.learner))
pred = profile.model.learner.predict(np.array(base_img))[0]
print(type(pred))
print(pred.shape)
pred.show()

# <class 'fastai.learner.Learner'>
# <class 'fastai.torch_core.TensorMask'>
# torch.Size([224, 224])
# <matplotlib.axes._subplots.AxesSubplot at 0x7f83741a7ef0>

[Screenshot: the 224x224 predicted mask from pred.show()]

base_img = img.img
plt.imshow(np.array(base_img))
plt.imshow(np.array(pred), alpha=0.25)

[Screenshot: mismatched overlay, with the 224x224 mask sitting in the corner of the full-size image]

base_img = img.img.resize((224,224))
plt.imshow(np.array(base_img))
plt.imshow(np.array(pred), alpha=0.25)

[Screenshot: overlay on the resized image, not quite right]

profile.model.learner = load_learner(mod_fp)
print(type(profile.model.learner))
pred = profile.model.learner.predict(np.array(base_img))[0]
print(type(pred))
print(pred.shape)
pred.show()

# <class 'fastai.learner.Learner'>
# <class 'fastai.torch_core.TensorMask'>
# torch.Size([224, 224])
# <matplotlib.axes._subplots.AxesSubplot at 0x7f83b82ff438>

[Screenshot: prediction based on the resized image]

base_img = img.img.resize((224,224))
plt.imshow(np.array(base_img))
plt.imshow(np.array(pred), alpha=0.25)

[Screenshot: overlay matching the resized input]

So the overlay only lines up when the input image is the one I resized to 224x224 before running learn.predict.
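Going the other direction is also possible: instead of shrinking the input image, the mask can be scaled up to the original image size before overlaying. A minimal nearest-neighbor upsample in plain NumPy (the commented usage lines reuse the `pred` and `img.img` names from above and are untested assumptions):

```python
import numpy as np

def upsample_mask(mask, out_h, out_w):
    """Nearest-neighbor upsample of a 2-D mask array to (out_h, out_w)."""
    in_h, in_w = mask.shape
    # Nearest source row/column index for every output pixel
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return mask[rows[:, None], cols]

# e.g. scale the 224x224 mask up to the 640x480 (h=480, w=640) input:
# big = upsample_mask(np.array(pred), 480, 640)
# plt.imshow(np.array(img.img))
# plt.imshow(big, alpha=0.25)
```

Nearest-neighbor matters here: bilinear interpolation would blend class indices into meaningless in-between values.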

I am a little stuck here; could someone help me find the image transform pipeline to run? I noticed there used to be a function called tfms_from_model in fastai v1 that I can't seem to find anymore.

I am running fastai.__version__ == '2.0.15'

For me, since I was interested in getting the image as transformed, this was solved by requesting it from the learn.predict call with with_input=True, which returns the decoded input image along with the predicted mask:
learner.predict(np.array(img.img), with_input=True)
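Another option that avoids resizing anything is to let matplotlib stretch the mask over the image's pixel grid via imshow's extent argument. A sketch with stand-in arrays at the sizes from this thread (the real img.img and pred would replace them):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

img = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the 640x480 photo
mask = np.zeros((224, 224), dtype=np.uint8)     # stand-in for the 224x224 TensorMask

plt.imshow(img)
# extent=(left, right, bottom, top) maps the mask onto the image's coordinates;
# top/bottom are flipped because image origin is the upper-left corner
overlay = plt.imshow(mask, alpha=0.25, extent=(0, 640, 480, 0))
```

This keeps both arrays at their native resolutions and only changes how the mask is drawn.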