CPU inference - model.predict() runs 6x slower on Windows than on Linux

Hi all,

I trained a cnn_learner model on my GPU and exported it for later prediction. I can reload the model and call predict() on images for inference, and I can also load the model onto the CPU and run inference there.
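For context, here's a stripped-down sketch of the inference path I'm using (the paths below are placeholders, not my actual ones):

```python
import torch
from fastai.vision import *

# force CPU inference for the exported model (it was trained on GPU)
defaults.device = torch.device('cpu')

# load the Learner exported with learn.export() (reads export.pkl from this dir)
learn = load_learner('path/to/export_dir')    # placeholder path

# run inference on a single image
img = open_image('path/to/test_image.jpg')    # placeholder path
pred_class, pred_idx, probs = learn.predict(img)
print(pred_class, probs)
```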

My issue is that the predict function runs about 5-6x faster on Linux than on Windows. On Linux the model predicts ~1.4 imgs/sec, but the same model on Windows manages only ~0.25 imgs/sec. This is a problem because I have a potential application that needs to run on the CPU of a Windows machine.
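The throughput numbers come from timing a simple loop along these lines (simplified; `image_paths` is a placeholder list, and `learn`/`open_image` are as in the sketch above):

```python
import time

# pre-load the images so disk I/O is excluded from the timing
imgs = [open_image(p) for p in image_paths]   # image_paths: placeholder list of files

start = time.time()
for img in imgs:
    learn.predict(img)
elapsed = time.time() - start

print(f'{len(imgs) / elapsed:.2f} imgs/sec')
```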

A few additional pieces of information:
- I have verified that the bottleneck is the model.predict() function; the slowdown is not in loading the images from disk or in the transforms.
- All of my CPU cores are firing during prediction.
- I'm using fastai v1.0.55, Python 3.7, PyTorch 1.1.0, on Windows 10 and Ubuntu under WSL (also tested on a dedicated Ubuntu 18.04 partition); see the environment snippet below.
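For anyone who wants to compare environments, this is the kind of snippet I'd run on both machines to dump versions and CPU/thread info (plain diagnostics, nothing fastai-specific):

```python
import os
import platform

import torch
import fastai

print(platform.platform())
print('python :', platform.python_version())
print('fastai :', fastai.__version__)
print('torch  :', torch.__version__)
print('logical CPUs :', os.cpu_count())
print('torch threads:', torch.get_num_threads())
```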

Has anyone come across this issue before, or have any insights/suggestions? From what I can tell it hasn't been discussed here previously. I'm aware Fast.ai is developed for Linux and that Windows is "use at your own risk", but I wanted to find out whether this is a known issue.

Thanks so much!
