Very slow inference using get_preds in fastai2

When using a resnet34 architecture, it takes me around 1 hour to classify 1820 pictures, which works out to roughly 2 s per picture. This seems very slow to me. What would be ways to make this faster?

test_df=pd.read_csv(data_drive/"test.csv") # 1820 pictures
dl_test = learn_inf.dls.test_dl(test_df)
preds, _ = learn_inf.get_preds(dl=dl_test)

Are you sure you’re using the GPU? Check dls.device, and check that your model is CUDA-enabled too.
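
For example (assuming the learner is called learn_inf as in the first post), something like this should show both:

learn_inf.dls.device                          # device the DataLoaders put batches on
next(learn_inf.model.parameters()).device     # device the model weights live on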

I had not checked, and indeed I am not using the GPU.

dl_test.device
device(type='cpu')

However, I am not sure how to check which device the learner is on:

  • How do I check which device the learner is on?
  • How can I move everything to the GPU?

Many Thanks!

learn.model = learn.model.cuda() will make it GPU enabled.

learn.dls.to('cuda') will push the DataLoaders to the GPU.

I also have this page bookmarked at this point :wink: https://discuss.pytorch.org/t/how-to-check-if-model-is-on-cuda/180
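
If I remember right, the check from that thread boils down to looking at where the parameters live, e.g.:

next(learn.model.parameters()).is_cuda    # True once the model is on the GPU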

Thank you, I have moved everything to CUDA and the running time is now 5 min instead of 60 min!

Or more easily, learn.dls.cuda() :wink:

For the dl_test from dl_test = learn_inf.dls.test_dl(test_df), calling dl_test.cuda() does not work; however, dl_test.to('cuda') works :slight_smile:
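
So, combining that with the earlier reply, the sequence that works looks roughly like this:

learn_inf.model = learn_inf.model.cuda()     # model on the GPU
dl_test = learn_inf.dls.test_dl(test_df)     # test DataLoader (on the CPU by default)
dl_test.to('cuda')                           # DataLoader has .to(), but no .cuda()
preds, _ = learn_inf.get_preds(dl=dl_test)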

Yes, DataLoader does not have a cuda method; only DataLoaders does.

You should also be able to say:

dl_test = learn_inf.dls.test_dl(test_df, device='cuda')

or something along those lines, to do it in one line.
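
The full flow from the first post would then collapse to something like this (assuming test_dl does pass device through; worth double-checking against the fastai docs):

test_df = pd.read_csv(data_drive/"test.csv")
dl_test = learn_inf.dls.test_dl(test_df, device='cuda')   # build the test DataLoader directly on the GPU
preds, _ = learn_inf.get_preds(dl=dl_test)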
