I’m in a similar place to many people on this thread. Having trained and saved a segmentation learner, I want to load it and use it for inference. I’m using learn.predict(img), and I don’t mind predicting one image at a time and resizing manually. My question is about normalization: I created my data with imagenet_stats normalization. Now, for inference, how can I apply the imagenet_stats normalization by hand to the images I feed to learn.predict?
As in the tutorial, data is created with:
data = (src.transform(get_transforms(), size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats)
)
I found the below function somewhere on the net which takes a more direct route for single images. I was also confused about whether normalization was applied automatically at inference time. I ran some tests and confirmed the result is the same as loading with ImageDataBunch with ds_tfms=None. So it seems the predict method applies the same resize and normalization that were used when the DataBunch was built, but not the augmentation transforms, which makes sense.
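For anyone who still wants to do it by hand (e.g. outside fastai), applying imagenet_stats is just per-channel mean/std normalization. A minimal NumPy sketch, with the standard ImageNet statistics hard-coded (the function names here are my own, not fastai API):

```python
import numpy as np

# Standard ImageNet per-channel statistics (the same values
# fastai exposes as imagenet_stats)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize_imagenet(img):
    """Normalize an HxWx3 float array with values in [0, 1]."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

def denormalize_imagenet(img):
    """Invert the normalization, e.g. to display an image again."""
    return img * IMAGENET_STD + IMAGENET_MEAN
```

If you pass raw images to learn.predict, though, you should not apply this yourself on top, since (as confirmed above) predict already normalizes with the stats the DataBunch was built with.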