Hi all,
I have trained a model for the camvid dataset (as part of lesson 3) which I exported to my Google Drive.
Then I loaded it with learn.load and executed the following:
pred_mask, lbl, probs = learn.predict(img)
The size of pred_mask is (720, 960), but my img has size (720, 1280).
Now when I try to overlay my image with the predicted mask, it does not fit.
I tried to achieve the overlay with the code from the fastai docs:
_,axs = plt.subplots(1,3, figsize=(8,4))
img.show(ax=axs[0], title='no mask')
img.show(ax=axs[1], y=pred_mask, title='masked')
pred_mask.show(ax=axs[2], title='mask only', alpha=1.)
The result I get is this:
But now I want to resize the mask, so I tried to do this:
pred_mask.resize((1.0,img.size[0],img.size[1]))
After doing so, I got this error: RuntimeError: grid_sampler(): expected input and grid to have same dtype, but input has long and grid has float
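If it helps to pin down where the error comes from: resizing in fastai goes through PyTorch's F.grid_sample, which requires the input and the sampling grid to share a dtype, while a segmentation mask's data tensor holds integer class labels (long). A minimal reproduction with stand-in tensors (not your actual mask):

```python
import torch
import torch.nn.functional as F

# Stand-in for a segmentation mask: integer class labels, dtype long
mask = torch.zeros(1, 1, 4, 4, dtype=torch.long)
# Sampling grid as grid_sample expects: shape (N, H_out, W_out, 2), dtype float
grid = torch.zeros(1, 2, 2, 2)

try:
    F.grid_sample(mask, grid, align_corners=False)
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError: input is long, grid is float
```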
So I thought maybe I was doing this wrong and should apply a transform instead, so I did:
tfms = get_transforms()
pred_mask.apply_tfms(tfms[0],size=(1, img.size[0], img.size[1]))
This gives the same error…
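One workaround that might sidestep the dtype issue (a sketch, not from the fastai docs): bypass the transform machinery and resize the mask's underlying tensor directly with torch.nn.functional.interpolate, casting to float for the resize and back to long afterwards, using nearest-neighbour mode so class labels aren't blended:

```python
import torch
import torch.nn.functional as F

# Stand-in for pred_mask.data: a (channels, H, W) tensor of class indices
mask = torch.randint(0, 32, (1, 720, 960))

# interpolate wants a float batch: add a batch dim, cast, resize, cast back
resized = F.interpolate(mask[None].float(), size=(720, 1280),
                        mode='nearest').long()[0]
print(resized.shape)  # torch.Size([1, 720, 1280])
```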
The only related post I found was this:
So any thoughts on what I am doing wrong, or how to solve this?
Thanks,
Vincent