Get predictions using GPU

Hi all, I am calculating predictions with a large batch size and I want to use a GPU for that. At the moment, I have something that looks like this:

import torch
from fastai.learner import load_learner

device = torch.device("cuda")
learner = load_learner(model_path)
dl = learner.dls.test_dl(predict_df, bs=100000, device=device)
predictions = learner.get_preds(dl=dl)[0].squeeze()

When I trained the model, I did it on a GPU with a batch size of 2048, and the learner was saved using export. Looking at the fastai docs, the method load_model has a device parameter, but load_learner does not. I would like to know why that is.
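For reference, the export at the end of training looked roughly like this (a sketch; the file name is just an example):

learner.export("export.pkl")  # saves the Learner (model plus empty DataLoaders) for inference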

With the code as it is now, the learner is not loaded on the GPU, so the predictions are calculated on the CPU rather than the GPU. How should I save and load the model so that they run on the GPU? Should I use save_model and load_learner instead? Can I still use test_dl in that case?

Thanks a lot for your advice!

Specify device="cuda" to load_learner

Thanks for replying, Zachary.
load_learner doesn't have a device parameter. If I add it, I get this error: load_learner() got an unexpected keyword argument 'device'.

@symeneses Use the following:

learner = load_learner(model_path, cpu=False)
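As a quick check (a sketch; assumes a CUDA device is available and that model_path points to the exported .pkl file), the device of the parameters tells you where the model ended up after loading:

learner = load_learner(model_path, cpu=False)
print(next(learner.model.parameters()).device)  # expect something like cuda:0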

By looking at _set_device, I realized what the issue was. Once the default device is set, the predictions are calculated on the chosen device. In load_learner, the param cpu is set to False, as @sinhak suggested.

Here is the complete code.

import torch
from fastai.basics import default_device
from fastai.learner import load_learner

# Pick the GPU if one is available, otherwise fall back to the CPU
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
default_device(use_cuda=use_cuda)  # set fastai's default device accordingly

# cpu=not use_cuda keeps the model on the GPU when CUDA is available
learner = load_learner(model_path, cpu=not use_cuda)
dl = learner.dls.test_dl(predict_df, bs=100000, device=device)
predictions = learner.get_preds(dl=dl)[0].squeeze()
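As a small sanity check (a sketch; assumes the first element of the batch is a plain tensor), you can confirm that the test batches and the model end up on the same device:

batch = dl.one_batch()
print(batch[0].device, next(learner.model.parameters()).device)  # both should report cuda:0 on a GPU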