Multi-GPU inference on new images

I have trained a classification network and exported it to run inference on new images, but I have about 70k images to process. Using learn.get_preds(), inference on all 70k images finished in about 20 seconds.
However, I have 3 GPUs, and I noticed that only one GPU is used during inference. I'd like to speed up inference by using all of the GPUs.
Is there any way to make learn.get_preds() use multiple GPUs, or some other way to use multiple GPUs when running inference on a large number of images?

I think parallel() should do the trick if there isn’t an easier way.

You could split the inference images into several batches and run each batch on its own GPU.
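A minimal sketch of that idea: shard the file list, then run one worker process per GPU. The sharding and process scaffold below are plain Python; the fastai calls (load_learner, get_preds) and the names predict_shard, multi_gpu_predict, and model_path are assumptions for illustration, so the worker here just echoes its shard rather than loading a real model.

```python
import math
from multiprocessing import get_context

def split_into_shards(items, n_shards):
    """Split a list into n_shards roughly equal contiguous chunks."""
    shard_size = math.ceil(len(items) / n_shards)
    return [items[i:i + shard_size] for i in range(0, len(items), shard_size)]

def predict_shard(args):
    """Hypothetical worker: run inference for one shard on one GPU.

    In a real run this would do something like:
        learn = load_learner(model_path)      # fastai: reload the exported model
        learn.model.to(f"cuda:{gpu_id}")      # pin this worker to its GPU
        preds, _ = learn.get_preds(...)       # inference on just this shard
    Here it only tags each file with its GPU id so the scaffold runs anywhere.
    """
    gpu_id, shard = args
    return [(gpu_id, f) for f in shard]

def multi_gpu_predict(files, n_gpus=3):
    shards = split_into_shards(files, n_gpus)
    # One process per GPU; 'spawn' avoids CUDA state being inherited via fork.
    with get_context("spawn").Pool(len(shards)) as pool:
        results = pool.map(predict_shard, list(enumerate(shards)))
    # Flatten the per-shard results back into one list, preserving order.
    return [r for shard_result in results for r in shard_result]
```

With 70k files and 3 GPUs this gives each worker roughly a third of the images; since inference is embarrassingly parallel, the speedup should be close to linear as long as loading the model per process is cheap relative to the total inference time.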