get_preds() GPU utilization problem

Hey everyone,

We are hitting a bottleneck with the get_preds() function when running inference with a custom image classification model. In short, GPU usage is far lower than expected.

Our workflow is the following:
from fastai.vision.all import *
import torch

imgs = get_image_files(image_dir)  # images to run inference on
learn = load_learner(f'{modelname}.pkl')  # exported learner
dl = learn.dls.test_dl(imgs, bs=batch_size, device=torch.device('cuda'))
predictions = learn.get_preds(dl=dl)

VRAM usage increases with the batch size, but GPU utilization typically sits at around 1%. Any suggestions as to why this might be the case?

Thanks!

Try passing cpu=False to load_learner. By default, load_learner loads the model onto the CPU, so get_preds runs inference there, which is why the GPU sits nearly idle.
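Something like this should do it (a minimal sketch reusing the imgs, modelname, and batch_size from your snippet; the print line is just an optional sanity check):

learn = load_learner(f'{modelname}.pkl', cpu=False)  # cpu=False keeps the model on the default device (your GPU)
print(next(learn.model.parameters()).device)  # should report cuda:0 rather than cpu
dl = learn.dls.test_dl(imgs, bs=batch_size, device=torch.device('cuda'))
predictions = learn.get_preds(dl=dl)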

Oh, that did it, thanks! 🙂