Inference on a single image of a different size

Hello everybody,

I’m training a UNet model for image segmentation of satellite imagery. I trained the model with patches of size (64, 64).
Then I want to predict on a new single image of size (717, 780). What I don’t understand is that when I run inference with:

testImg = open_image(pathToImgTest)
results = learn.predict(testImg)

the predicted mask comes back with shape:

torch.Size([64, 64])

My result has the same size as the training patches, not the size of the image I’m trying to predict on…

I’m missing something in how to use my trained model, so if you have any suggestions I’d be glad to hear them!


What are the transforms you applied during Datasets/Blocks creation at training time?
Maybe you have a Resize there which gets automatically applied at inference time too.
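To illustrate the suspicion: if a fixed-size resize from training is re-applied at inference, any input collapses to the training size no matter what shape it started with. A toy nearest-neighbour resize in NumPy (an illustration only, not fastai’s actual implementation) shows the effect:

```python
import numpy as np

def resize_nn(img, size):
    """Toy nearest-neighbour resize to (size, size) -- illustration only."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source col index for each output col
    return img[rows][:, cols]

big = np.random.rand(717, 780)   # same shape as the satellite image
small = resize_nn(big, 64)
print(small.shape)               # (64, 64) -- the training patch size
```

Any input run through such a transform comes out at (64, 64), which would explain the prediction shape regardless of the test image’s size.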

Here’s how I’m building my dataset for training (I found it on the forum):

class SegLabelListCustom(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class SegItemListCustom(SegmentationItemList):
    _label_cls = SegLabelListCustom

src = (SegItemListCustom.from_folder(path)
      .split_by_folder(train = 'Train', valid = 'Test')
      .label_from_func(funcTrain, classes = codes)) 

And then to create the databunch I’m doing:

databunch = (src.transform(get_transforms(flip_vert=True), size=size, tfm_y=True)
    .databunch(bs=4, num_workers=0))

So learn.predict() applies the transforms from the databunch?

That is my understanding.
You don’t have any Resize though :slight_smile: .
I will have to take a closer look.

I think the resize comes from the size=size you pass to .transform() here

I loaded my trained model using:

learn = load_learner(path, 'trainedModel.pkl')

My learn variable has no data attached, and when I run predict again, my output still comes back at size (64, 64).

If you have any updates I’d be glad to hear from you :slight_smile:

And it will be. When you export a model, it keeps track of the transforms that were applied to the validation set and applies them at inference. You need to manually go in and adjust the internal transforms stored away in the Learner. I’m not 100% certain where those live in v1; I only know where they are in fastai v2.
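A workaround that sidesteps the stored transforms entirely is to keep predicting at the training size: tile the large image into (64, 64) patches, predict each patch, and stitch the masks back together. A minimal NumPy sketch of the tiling logic, where `predict_patch` is a hypothetical stand-in for a per-patch call to `learn.predict`:

```python
import numpy as np

def predict_patch(patch):
    # Hypothetical stand-in for learn.predict on one (64, 64) patch;
    # here it just thresholds to produce a binary "mask".
    return (patch > 0.5).astype(np.uint8)

def predict_tiled(img, patch=64):
    """Pad img up to a multiple of `patch`, predict each tile, stitch masks."""
    h, w = img.shape
    ph = -h % patch  # rows of padding needed to reach a multiple of patch
    pw = -w % patch  # cols of padding needed
    padded = np.pad(img, ((0, ph), (0, pw)), mode='reflect')
    out = np.zeros(padded.shape, dtype=np.uint8)
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            out[y:y + patch, x:x + patch] = predict_patch(
                padded[y:y + patch, x:x + patch])
    return out[:h, :w]  # crop the padding back off

img = np.random.rand(717, 780)
mask = predict_tiled(img)
print(mask.shape)  # (717, 780)
```

This matches how the model was trained (on 64x64 patches), though predictions can show seams at tile borders; overlapping the tiles and blending is a common refinement.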