Hi all, I am computing predictions with a large batch size and I want to use a GPU for that. At the moment, I have something that looks like:
```python
import torch
from fastai.learner import load_learner

device = torch.device("cuda")
learner = load_learner(model_path)
dl = learner.dls.test_dl(predict_df, bs=100000, device=device)
preds, _ = learner.get_preds(dl=dl)  # get_preds returns a (predictions, targets) tuple
predictions = preds.squeeze()
```
When I trained the model, I did it on a GPU with a batch size of 2048, and the learner was saved using `export`. Looking at the fastai docs, the method `load_model` has a `location` parameter but `load_learner` does not, and I would like to know why that is.
With the code as it is now, the learner is not loaded onto the GPU, and the predictions are computed on the CPU rather than the GPU. How should I save and load the model so that they run on the GPU? Should I use `load_learner` instead? Can I still use `test_dl` in that case?
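For context, my mental model of GPU inference comes from plain PyTorch, where both the model and each batch have to be moved to the same device. Here is a minimal sketch of that pattern with a toy `nn.Linear` standing in for the exported model (this is just my understanding, not fastai's internals; it falls back to CPU when no GPU is present). I am not sure how much of this fastai is supposed to handle for me:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-in for the exported learner's model, moved to the target device
model = nn.Linear(4, 1).to(device)
model.eval()

# Inputs must live on the same device as the model's parameters
batch = torch.randn(8, 4, device=device)

with torch.no_grad():  # no gradients needed for prediction
    preds = model(batch).squeeze()

print(preds.shape, preds.device.type)
```

My question is essentially whether `load_learner` plus `test_dl(..., device=device)` is meant to take care of both of these moves, or whether I have to move the model myself.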
Thanks a lot for your advice!