Making predictions for a tensor

Hi,
I want to get predictions for a whole batch (a tensor that I have in memory). The images are created on the fly. What is the right way to do this? (In my case it's during inference.)
Let's say img is a correctly loaded fastai.vision.image.Image. I tried the following:

# build a dummy batch by stacking the same image five times
xb = torch.stack([img.data for _ in range(5)])
# get predictions for the batch (the second element is a dummy labels tensor)
preds = learn.pred_batch(ds_type=DatasetType.Test, batch=(xb.cuda(), tensor(range(5))))

But this gives a slightly different result compared to:

torch.stack([learn.predict(img)[2] for _ in range(5)])  # [2] = the probabilities tensor returned by predict

Thanks

You forgot to normalize your batch, and I'm guessing that's why you see the difference. You can do it with:

(xb,yb) = data.train_dl.tfms[0]((xb,yb))

(just check that data.train_dl.tfms[0] is the normalize transform, but it should be).
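
Putting it together, a minimal sketch of the corrected snippet (assuming data.train_dl.tfms[0] is indeed the normalize transform, and reusing dummy labels for yb as above):

xb = torch.stack([img.data for _ in range(5)]).cuda()
yb = tensor(range(5))  # dummy labels; pred_batch just needs an (xb, yb) pair

# normalize the batch the same way the DataBunch does during training
xb, yb = data.train_dl.tfms[0]((xb, yb))

preds = learn.pred_batch(ds_type=DatasetType.Test, batch=(xb, yb))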


Yeah, now it works. Thanks.

To add to this, if your image is in the form of a numpy array with shape (h, w, chan), you can use:

img = pil2tensor(img, np.float32)  # (h, w, chan) numpy array -> (chan, h, w) float tensor
img = Image(img)                   # wrap the tensor in a fastai Image
pred = learn.predict(img)

This lets you iterate over a batch or queue of images in a loop, as in the sketch below.
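
For example, a minimal sketch of such a loop (frames here is a hypothetical iterable of (h, w, chan) numpy arrays, and learn is the loaded Learner from above):

from fastai.vision import Image, pil2tensor
import numpy as np

preds = []
for arr in frames:  # hypothetical queue/list of (h, w, chan) numpy arrays
    img = Image(pil2tensor(arr, np.float32))
    preds.append(learn.predict(img))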

@sgugger What do we use in place of yb if we are normalizing at inference time? I would not have any labels in this scenario. My current code looks like this:

def inference(model_path, images):
    learn = load_learner('.', model_path)
    tensors = []

    for image in images:
        img = image / 255.0                # scale pixel values to [0, 1]
        img = pil2tensor(img, np.float32)  # (h, w, chan) -> (chan, h, w)
        img = Image(img)
        tensors.append(img.data)

    xb = torch.stack(tensors)
    # dummy labels for yb, as in the snippets above
    outputs = learn.pred_batch(ds_type=DatasetType.Test,
                               batch=(xb.cuda(), tensor(range(len(images)))))
    outputs = outputs.numpy()
    outputs = np.squeeze(outputs)

    return outputs
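
From the earlier posts it looks like yb is only a placeholder here: the normalize transform just passes the labels through, so a dummy tensor (like the tensor(range(...)) already in the snippets above) should work. A minimal sketch of adding the normalization step, assuming you still have access to a DataBunch data whose train_dl carries the normalize transform (after a bare load_learner that may not be the case):

xb = torch.stack(tensors)
yb = tensor(range(len(images)))  # dummy labels; only passed through by the transform

# apply the same normalize transform as earlier in this thread
xb, yb = data.train_dl.tfms[0]((xb.cuda(), yb))

outputs = learn.pred_batch(ds_type=DatasetType.Test, batch=(xb, yb))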