Using GPU for Inference locally

Hi,

I have built a model for detecting defects in a finished product. The approach is to localize the object, take multiple overlapping crops of the image, and then classify each cropped image as ‘Good’ or ‘Defective’. I need to deploy the system locally on a Linux machine.
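
Roughly, the overlapping cropping looks like this (a sketch using PIL; the crop size and stride are just placeholders, not the actual values I use):

from PIL import Image

def sliding_crops(img_path, crop_size=224, stride=112):
    # step by `stride`; a stride smaller than crop_size gives overlapping crops
    img = Image.open(img_path)
    w, h = img.size
    for top in range(0, max(h - crop_size, 0) + 1, stride):
        for left in range(0, max(w - crop_size, 0) + 1, stride):
            yield img.crop((left, top, left + crop_size, top + crop_size))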

As I need to get inferences for multiple image crops simultaneously, I want to use a GPU. I have the following questions in this regard:

 1. How can I run learn.predict for multiple images simultaneously using a GPU?

 2. As I am only using the GPU for inference, what is the most cost-effective solution in the form of a GPU box (or otherwise) on which I can install Fastai2 and deploy the model?

Regards

1 Like
  1. You don’t need to do anything different to run inference on the GPU. Most of us actually have to do something extra to force it onto the CPU when we deploy (see the sketch after this list).
  2. No idea, sorry.
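
For point 1, here is a minimal sketch of controlling the device at load time (assuming the model was exported with learn.export(); the file name export.pkl is just a placeholder):

from fastai.vision.all import load_learner

# load_learner places the model on the CPU by default; pass cpu=False to keep it on the GPU
learn = load_learner('export.pkl', cpu=False)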

Hi,

  1. Can learn.predict() accept a batch instead of one image? For this to happen, do we need to set up a dataloader without augmentations?

You’ll have to create a data loader.

# build a test DataLoader from the image files (validation-time transforms only, no training augmentations)
dl = learn.dls.test_dl(files)
# run batched inference; get_preds returns (predictions, targets)
preds, _ = learn.get_preds(dl=dl)
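
If you want the class names rather than the raw probabilities, you can map the predicted indices back through the vocab (a small sketch continuing from the lines above; 'Good'/'Defective' are the labels from your post):

# map each crop's argmax index back to its class name ('Good' / 'Defective')
labels = [learn.dls.vocab[i] for i in preds.argmax(dim=1).tolist()]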

Florian

2 Likes