Callback & Inference

This seems so simple yet I can’t quite figure out what the best solution would be…

Here is what happened. I created a callback that normalizes input images (very similar to norm_imagenette which normalizes a batch during begin_batch phase). The learner trains great and I saved the model’s state_dict. Then when it was time to do an inference using the saved model, it was not performing as I would have expected. After hours of poking around, I realized the obvious. The input to the model did not get normalized by the callback like the training/validation set did because the model’s forward function was called directly with a user’s input image.
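For context, the callback looked roughly like this (a sketch in the course's Runner/Callback style rather than my exact code; NormalizeCallback is a made-up name):

class NormalizeCallback(Callback):
    def __init__(self, mean, std):
        self.mean, self.std = mean, std
    def begin_batch(self):
        # runs only inside the training loop, so a direct call to
        # model.forward() never sees this normalization
        self.run.xb = (self.run.xb - self.mean) / self.std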

Now I am contemplating whether I should have put the normalization mechanism in the model’s forward method (that way, the input is always normalized). If this was discussed in the class, I totally missed it and I feel silly for it.

Any advice?

Fast.ai 1.0 adds the normalization step to the transformations (both for train and valid).

Maybe this can help:

It did work great for training and validation. The issue came when it was time to deploy the trained model to production: the normalization step is no longer there, because there is no learner or DataBunch.

If you use the model as a standard PyTorch model, you need to normalize the input manually (subtract the mean and divide by the stddev).
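Something like this (a minimal sketch; pil_img and model stand in for your own image and trained network, and the stats assume ImageNet normalization):

from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
x = normalize(transforms.ToTensor()(pil_img))  # ToTensor scales to [0, 1] first
preds = model(x.unsqueeze(0))                  # add a batch dimension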

All that makes sense. I guess my question is more general.

If there are things that need to be executed at inference time as well, why not just put them in the model rather than making them a callback? I'm not sure I'm explaining myself well, but I'm wondering if there is a mechanism I missed that applies normalization at inference when it was done via a Callback during training.
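For what it's worth, this is roughly what I mean by putting it in the model (a sketch; NormalizedModel is a made-up name and the ImageNet stats are just an example):

import torch
from torch import nn

class NormalizedModel(nn.Module):
    """Wraps a trained model so forward() always normalizes its input."""
    def __init__(self, model, mean, std):
        super().__init__()
        self.model = model
        # buffers move with .to(device) and are saved in the state_dict
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return self.model((x - self.mean) / self.std)

# e.g. NormalizedModel(trained_model, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])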

Yes, if you export your learner. It normally saves the callbacks with its state.
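In fastai v1 that looks something like this (a sketch; img is whatever image you want to predict on):

learn.export()                         # writes export.pkl to learn.path by default
learn_inf = load_learner(learn.path)   # no DataBunch needed at this point
pred_class, pred_idx, probs = learn_inf.predict(img)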


Makes sense. I was thinking a lot of the callbacks I added aren't needed for inference, so there's no point in loading the entire learner, but maybe I'll keep track of which callbacks need to run all the time and which are training-only. That way, I can continue to do what we did in fastai v1: https://docs.fast.ai/tutorial.inference.html.
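Something like this, maybe (a made-up convention, not an existing fastai feature):

# hypothetical: tag each callback, then filter when loading for inference
class NormalizeCallback(Callback):
    run_at_inference = True   # invented attribute

inference_cbs = [cb for cb in learn.callbacks
                 if getattr(cb, 'run_at_inference', False)]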

Thanks 🙂


Hi @hiromi @sgugger
I am also struggling here: when I load the trained model at inference time on AWS Lambda, I can't get the input tensors to match what the model saw during training.

I think the problem is at the transform stage. This is the training code for the transformations:
sd = il.split_by_rand_pct(0.1)  # split data
ll = sd.label_from_func(get_label, label_delim='|')
data = ll.transform(size=128).databunch(bs=16).normalize(imagenet_stats)

Following is the inference code:
import PIL

def pil_loader_resize(path):
    with open(path, 'rb') as f:
        size = (128, 128)
        img = PIL.Image.open(f)
        # thumbnail preserves aspect ratio, so the result is not
        # necessarily exactly 128x128 like transform(size=128) produces
        img.thumbnail(size, PIL.Image.ANTIALIAS)
        return img.convert('RGB')

xresize = pil_loader_resize(img_path)

# transforms.Normalize expects a tensor, not a PIL image, and returns
# a new tensor rather than modifying its input in place
to_tensor = transforms.ToTensor()
data_norm = transforms.Normalize(  # ImageNet stats
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225])
x = data_norm(to_tensor(xresize))

response = predict(x, model)


The prediction array also does not match what I get from preds = learn.predict(x) during training with the same image.
Any help would be appreciated. Thanks,
Ritesh
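One way to see where the two pipelines diverge is to run the same image through the exported learner and through your manual code, then compare (a sketch, assuming you exported with learn.export() and model_dir points at the folder holding export.pkl):

from fastai.vision import *

learn_inf = load_learner(model_dir)              # model_dir: folder with export.pkl
fastai_pred = learn_inf.predict(open_image(img_path))
manual_pred = predict(x, model)                  # x: normalized tensor from above
# if these disagree, the preprocessing (resize/crop/normalize) differs

Note also that img.thumbnail preserves aspect ratio, while ll.transform(size=128) crops/resizes to an exact 128×128, so the two pipelines can produce differently shaped tensors for the same image.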