Inference on Jupyter is way faster?

Hi,

So I’ve followed the tutorials and built myself an image classification model based on resnet34 with the fastai framework.

My program takes in a video stream, passes an image to the model, and gets back a two-class prediction. When I run my program in a locally installed Jupyter notebook, inference is fast, at 40~60 ms.

When I copy-pasted the code, essentially turning it into a .py program, and ran it both from VSCode and straight from the command line, my inference time went up to 300~500 ms.

Does anyone know why it’s so much faster in Jupyter? I don’t even know where to begin looking.

The timing code for both is:

        toc1 = time.perf_counter()
        pred_class, pred_idx, outputs = self.model.predict(img)
        toc2 = time.perf_counter()
        score = outputs.max()
        print(f"inference time is {toc2 - toc1:0.4f} seconds")

In both cases I load the model using:

    data2 = ImageDataBunch.single_from_classes(path, classes, ds_tfms=get_transforms(), size=128).normalize(imagenet_stats)
    learn = cnn_learner(data2, models.resnet34, metrics=error_rate)
    learn.load('res34_stage2_1');

Hi Toma. IMHO, the first thing to check would be GPU usage.
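
For example, a quick way to confirm where the model actually sits, using the `learn` object from above (a minimal sketch; these are plain PyTorch calls, nothing fastai-specific):

    import torch

    # Is a GPU visible to PyTorch at all, and where do the loaded weights live?
    print(torch.cuda.is_available())              # True if CUDA can be used
    print(next(learn.model.parameters()).device)  # e.g. cpu or cuda:0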

Hi Pomo,

Both are running on the CPU for inference.
Looking at my task manager, all 8 cores of my CPU are being utilized, but I’ll double-check whether there’s any activity on the GPU.
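
One thing that can differ between a Jupyter kernel and a plain script (an assumption worth testing, not something confirmed here) is how many intra-op threads PyTorch uses for CPU inference:

    import torch

    # Compare this value in Jupyter vs. the command line; oversubscribed
    # threads can make CPU inference slower, not faster.
    print(torch.get_num_threads())
    torch.set_num_threads(4)  # e.g. pin to the physical core count and re-time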

Thanks!

Compare the Python version and package versions used in the Jupyter kernel versus the command line. You can ensure the two are consistent by creating a Python virtual environment, installing packages in that environment, then creating an IPython kernel to use in Jupyter. See https://ipython.readthedocs.io/en/stable/install/kernel_install.html
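
A quick sanity check is to run the same few lines in both environments and diff the output (a minimal sketch; it only assumes torch and fastai are importable):

    import sys
    import torch
    import fastai

    print(sys.executable)      # which interpreter is actually running
    print(sys.version)
    print(torch.__version__)
    print(fastai.__version__)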

Hi, guys.

I got it to work. I’m not entirely sure how, but I refactored my code so that the predict call lives in a method within a class, and that seemed to speed things up quite a bit.
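
For anyone who hits this later: one common explanation (an assumption, not confirmed in this thread) is that the very first predict call pays one-time warm-up costs, so it’s worth averaging over repeated calls before comparing environments. A minimal sketch (the helper name is hypothetical; `learn` and `img` are from the code above):

    import time

    def timed_predict(learn, img, n=10):
        learn.predict(img)  # warm-up call, excluded from the measurement
        tic = time.perf_counter()
        for _ in range(n):
            learn.predict(img)
        toc = time.perf_counter()
        return (toc - tic) / n  # mean seconds per inference

    print(f"mean inference time: {timed_predict(learn, img):0.4f} s")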