Fastai text parallel processing

I have an NLP model trained with fastai, and I am looking for a way to batch-predict on new data. Is there a way to parallelize model loading so that a copy of the model sits on each of multiple GPUs, letting me use a much larger effective batch size when I call the get_preds method? I am currently loading the model with load_learner, and I am not sure whether there is a better approach.
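For context, one common pattern (not fastai-specific) is to shard the inputs, one shard per GPU, and run a separate worker per device that loads its own copy of the learner. This is only a sketch of that idea: the sharding helper below is hypothetical, and the fastai calls are shown only in comments, not executed.

```python
# Sketch, assuming a fastai v2 learner exported with learn.export().
# Idea: split the texts into one contiguous shard per GPU, then have
# each worker process call load_learner + get_preds on its own shard.

def shard(items, n_shards):
    """Split items into n_shards roughly equal, contiguous chunks."""
    k, r = divmod(len(items), n_shards)
    out, start = [], 0
    for i in range(n_shards):
        size = k + (1 if i < r else 0)  # first r shards get one extra item
        out.append(items[start:start + size])
        start += size
    return out

# Per-worker logic would look roughly like this (illustrative only):
#   learn = load_learner('export.pkl', cpu=False)     # one copy per GPU
#   dl = learn.dls.test_dl(my_shard)                  # shard for this worker
#   preds, _ = learn.get_preds(dl=dl)

texts = [f"doc {i}" for i in range(10)]
shards = shard(texts, 4)  # e.g. 4 GPUs -> shards of sizes 3, 3, 2, 2
```

The per-worker predictions can then be concatenated back in shard order, since the chunks are contiguous and cover the input exactly once.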
