I’m training a UNet model for image segmentation of satellite imagery. I trained the model with patches of size (64, 64).
Then I want to predict on a new single image of size (717, 780). The thing I don’t understand is that when I’m loading the model with:
What are the transforms you applied during Datasets/Blocks creation at training time?
Maybe you have a Resize there, which gets automatically applied at inference time too.
And it will be. When you export a model, it keeps track of the transforms that were applied to the validation set and applies them again at inference. You need to manually go in and adjust the internal transforms stored away in the Learner. I’m not 100% certain where those live in fastai v1; I only know where they are in fastai v2.
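As an aside, if you’d rather not fight the stored Resize at all, a common alternative for patch-trained segmentation models is to tile the large image into the training patch size, predict each tile, and stitch the results back together. Here is a minimal NumPy sketch of that idea; `predict_patch` is a stand-in for whatever call your framework uses to run the model on one tile (the thresholding lambda below is just a dummy so the example runs):

```python
import numpy as np

def predict_tiled(image, predict_patch, patch=64):
    """Split a large image into patch x patch tiles, run the model on
    each tile, and stitch the per-tile predictions back into a
    full-size mask."""
    h, w = image.shape[:2]
    # Pad bottom/right edges so both dimensions are multiples of the
    # patch size; reflect-padding avoids hard black borders.
    pad_h = (-h) % patch
    pad_w = (-w) % patch
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="reflect")
    out = np.zeros(padded.shape[:2], dtype=np.float32)
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            tile = padded[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = predict_patch(tile)
    return out[:h, :w]  # crop the padding away

# Dummy "model": any callable mapping a (64, 64) tile to a (64, 64) mask.
dummy_model = lambda tile: (tile > tile.mean()).astype(np.float32)

img = np.random.rand(717, 780).astype(np.float32)
mask = predict_tiled(img, dummy_model)
print(mask.shape)  # (717, 780)
```

In practice people often use overlapping tiles and average the overlaps to hide seams at tile boundaries, but the non-overlapping version above is the simplest starting point.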