Should we .normalize() pictures before we make a prediction?

Hi,

since we use .normalize(imagenet_stats) on the data before training, shouldn’t we use something like

img = open_image(my_path).normalize()
learn.predict(img)

?


I have the same question, especially for this use case: would normalization help with prediction for an image that is over- or underexposed?

PyTorch does have a very elegant way to normalize to imagenet_stats, but I can’t figure out how to use models saved with fastai (export.pkl or bestmodel.pth) to run a prediction. (I’ve tried learn.predict, but it didn’t work with this normalized img.)

# We can do all this preprocessing using a transform pipeline.
from PIL import Image
from torch.autograd import Variable
from torchvision import transforms

min_img_size = 224  # The min size, as noted in the PyTorch pretrained models doc, is 224 px.
transform_pipeline = transforms.Compose([transforms.Resize(min_img_size),
                                         transforms.ToTensor(),
                                         transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                              std=[0.229, 0.224, 0.225])])
img = Image.open(my_path)      # load the image as PIL first
img = transform_pipeline(img)  # -> FloatTensor of shape (3, H, W), normalized to ImageNet stats

# PyTorch pretrained models expect the Tensor dims to be (num input imgs, num color channels, height, width).
# Currently however, we have (num color channels, height, width); let's fix this by inserting a new axis.
img = img.unsqueeze(0)  # Insert the new axis at index 0 i.e. in front of the other axes/dims.

# In PyTorch < 0.4, inputs also had to be wrapped in a Variable, a thin wrapper
# around a Tensor; since 0.4 this line is a no-op and can be dropped.
img = Variable(img)
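From there, a minimal inference sketch in plain PyTorch (the resnet34 here is just a stand-in; substitute whatever model you actually loaded):

import torch
from torchvision import models

model = models.resnet34(pretrained=True)  # stand-in for your own loaded model
model.eval()                              # disable dropout, use running batchnorm stats
with torch.no_grad():                     # inference only, no gradients needed
    logits = model(img)                   # shape: (1, num_classes)
pred_idx = logits.argmax(dim=1).item()    # index of the most probable class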

You want to create your model the same way you created your training model.

Then you load what you saved INTO the model. What you saved isn’t a whole model; it’s actually just the weights of that model.

So you create a model with an architecture, train it, and save the weights. Then you create a new model with the same architecture and load the weights into that model.

Does that make sense?

As for the code, I think a simple:
learn = create_cnn(your_data, your_arch)  # same data bunch and architecture as in training
learn.load('path/weights')                # loads the saved weights into the Learner

will more or less do the trick :wink:
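If you saved with learn.export() (the export.pkl case), fastai v1 also has load_learner, which, as far as I can tell, restores the transforms and normalization used at training time, so predict handles the normalization for you. A sketch, assuming export.pkl sits in 'path/to/export_dir':

from fastai.vision import load_learner, open_image

learn = load_learner('path/to/export_dir')        # loads 'export.pkl' from that folder
img = open_image(my_path)                         # no manual .normalize() needed
pred_class, pred_idx, probs = learn.predict(img)  # transforms/normalization applied internally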

But your original question wasn’t about model weights, it was about single-item inference and normalization. Did you find an answer yet?

I believe you want to normalize your new image with the mean and standard deviation you used for normalizing your training set. If you used imagenet_stats for normalizing, then use that again. Otherwise, use your training data’s mean and std to normalize both your validation and your test data.
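For the non-ImageNet case, a minimal sketch of computing your own stats in plain PyTorch (train_tensor is a hypothetical float tensor of shape (N, 3, H, W) with values in [0, 1], stacked from your training images):

from torchvision import transforms

# Per-channel mean/std over all training images.
mean = train_tensor.mean(dim=(0, 2, 3))
std = train_tensor.std(dim=(0, 2, 3))

# Use these same stats for train, valid, and test images.
normalize = transforms.Normalize(mean=mean.tolist(), std=std.tolist())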