I’m training a UNet model for image segmentation of satellite imagery. I trained the model with patches of size (64, 64).
Then I want to predict on a single new image of size (717, 780). What I don’t understand is that when I load the model and predict with:
```python
learn.load('stage-1')
testImg = open_image(pathToImgTest)
results = learn.predict(testImg)
results.size  # torch.Size([64, 64])
```
my result has the same size as the training patches, not the size of the image I’m trying to predict on…
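In case it helps clarify what I’m after: below is a minimal sketch of the tiled-prediction workaround I have in mind, where the image is padded to a multiple of the patch size, each 64×64 tile is predicted separately, and the results are stitched back together. The `predict_patch` callable is a hypothetical stand-in for the model’s per-patch prediction (here I just use an identity function to show the stitching); I’d prefer a built-in way if one exists.

```python
import numpy as np

def predict_tiled(image, predict_patch, patch=64):
    """Pad `image` up to a multiple of `patch`, run `predict_patch`
    on each tile, stitch the tiles, then crop back to the original size.
    `predict_patch` is a placeholder for the model's per-patch prediction."""
    h, w = image.shape[:2]
    pad_h = (patch - h % patch) % patch
    pad_w = (patch - w % patch) % patch
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="reflect")
    out = np.zeros_like(padded)
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            out[y:y + patch, x:x + patch] = predict_patch(
                padded[y:y + patch, x:x + patch]
            )
    return out[:h, :w]  # crop the padding away again

# Dummy "model": identity prediction, just to exercise the stitching.
mask = predict_tiled(np.random.rand(717, 780), lambda tile: tile, patch=64)
print(mask.shape)  # (717, 780)
```

With a real model, `predict_patch` would wrap the learner’s per-patch call instead of the identity lambda.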
I must be missing something in how to use my trained model, so if you have any suggestions I’d be glad to hear them!