Why is predicting with a text_classifier_learner so slow?

Hello,

I built a sentiment analysis model using this tutorial.

I later exported the model and saved it for later use.

It takes 30 seconds to make predictions for 100 examples. I am using the free version of Google Colab.

results = [predictor.predict(c) for c in df.iloc[100:200, -1]]

My inputs are just simple texts. Would it be faster to transform all of them into a TextDataBunch and then call predict on all of them?

I don’t know how to do that either, especially since I am adding the examples dynamically.

First off, make sure you have set your runtime accelerator to GPU. Otherwise your program will run on the CPU, which can be excruciatingly slow.
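As a quick sanity check (assuming PyTorch is installed, which fastai requires), you can verify from Python that the GPU runtime is actually active:

```python
import torch

# True when the Colab runtime is set to GPU and CUDA is visible to PyTorch
print(torch.cuda.is_available())
```

If this prints False, change the runtime type under Runtime → Change runtime type before doing anything else.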

learn.predict is meant for inference on single data points; in your case, predicting on batches will be much faster. In fastai, you can add a test set via the data block API and get its predictions using learn.get_preds(DatasetType.Test). Alternatively, you could turn your test set into a DataLoader object and run learn.validate(test_dl).

P.S.: I strongly suggest you move to fastai2, as it is easier to use and well documented, and you can get more help on the forums.

Good luck!
