Fastai is not using the GPU on my local Windows machine

When I run the command:

python -c 'import fastai.utils.collect_env; fastai.utils.collect_env.show_install(1)'

=== Software === 
python        : 3.6.8
fastai        : 1.0.50.post1
fastprogress  : 0.1.20
torch         : 1.0.1
torch cuda    : 10.0 / is available
torch cudnn   : 7401 / is enabled

=== Hardware === 
torch devices : 1
  - gpu0      : GeForce GTX 1050 Ti

=== Environment === 
platform      : Windows-10-10.0.17134-SP0
conda env     : ptorch
python        : C:\Users\Pawan\Anaconda3\envs\ptorch\python.exe
sys.path      : 
C:\Users\Pawan\Anaconda3\envs\ptorch\python36.zip
C:\Users\Pawan\Anaconda3\envs\ptorch\DLLs
C:\Users\Pawan\Anaconda3\envs\ptorch\lib
C:\Users\Pawan\Anaconda3\envs\ptorch
C:\Users\Pawan\Anaconda3\envs\ptorch\lib\site-packages
C:\Users\Pawan\Anaconda3\envs\ptorch\lib\site-packages\IPython\extensions
no nvidia-smi is found

When I run the code, it is utilizing the CPU only.
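
For what it's worth, here is the kind of sanity check I run from inside the same conda env. It only assumes fastai v1 (where defaults is importable from fastai.torch_core) and is meant as a sketch, not a fix:

# Minimal sanity check, assuming fastai v1 / PyTorch 1.0:
import torch
from fastai.torch_core import defaults

print(torch.cuda.is_available())       # should print True if PyTorch can see the GPU
print(torch.cuda.get_device_name(0))   # should print e.g. "GeForce GTX 1050 Ti"
print(defaults.device)                 # the device fastai puts new learners on

# Forcing the default explicitly (it should already be cuda when available):
defaults.device = torch.device('cuda')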

I’m running into exactly the same problem when trying to work through lesson 1: everything seems to be working except that only the CPU is utilized and training is therefore super slow.

Here’s the output when I run the command:

=== Software ===
python        : 3.7.3
fastai        : 1.0.51
fastprogress  : 0.1.21
torch         : 1.0.1
torch cuda    : 10.0 / is available
torch cudnn   : 7401 / is enabled

=== Hardware ===
torch devices : 1
  - gpu0      : GeForce GTX 1060 6GB

=== Environment ===
platform      : Windows-10-10.0.17134-SP0
conda env     : fastai_v1
python        : C:\Users\Patrick\Anaconda3\envs\fastai_v1\python.exe
sys.path      :
C:\Users\Patrick\Anaconda3\envs\fastai_v1\python37.zip
C:\Users\Patrick\Anaconda3\envs\fastai_v1\DLLs
C:\Users\Patrick\Anaconda3\envs\fastai_v1\lib
C:\Users\Patrick\Anaconda3\envs\fastai_v1
C:\Users\Patrick\Anaconda3\envs\fastai_v1\lib\site-packages
no nvidia-smi is found

I already tried reinstalling the respective modules, but to no avail, and I’m about to install Ubuntu to see if I can get it working there.
Since you posted the problem 2 weeks ago: did you manage to find a solution yet?

Hello @12patman34,
how many workers (num_workers) are you using?
Set it to 3 or 4.
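
For example, with a lesson-1-style data bunch the worker count can be passed straight through (path_img, fnames and pat are just the placeholders from that notebook, so adjust to your setup):

# Sketch only: lesson-1-style ImageDataBunch with an explicit worker count.
data = ImageDataBunch.from_name_re(
    path_img, fnames, pat,
    ds_tfms=get_transforms(), size=224, bs=64,
    num_workers=4,   # CPU subprocesses used for loading/augmenting batches
).normalize(imagenet_stats)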

Thanks for the suggestion - sadly it didn’t work.
And correct me if I’m wrong, but doesn’t num_workers mainly affect the CPU?

What actually ended up working for me was to use Ubuntu. There it’s working like a charm, no GPU problems whatsoever.

num_workers is the number of subprocesses used for data loading; 0 means the data is loaded in the main process.
Increasing num_workers worked for me, together with wrapping the code in a "main()" function guarded by "if __name__ == '__main__':" (see the sketch below).
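
In case it helps someone else, the layout that ended up working looks roughly like this (MNIST_SAMPLE is just a stand-in dataset; the point is the guard, which Windows needs because DataLoader workers are spawned as fresh processes that re-import the script):

from fastai.vision import *

def main():
    path = untar_data(URLs.MNIST_SAMPLE)                     # small stand-in dataset
    data = ImageDataBunch.from_folder(path, num_workers=4)   # worker subprocesses for loading
    learn = cnn_learner(data, models.resnet18, metrics=accuracy)
    learn.fit_one_cycle(1)

if __name__ == '__main__':   # required on Windows so spawned workers don't re-run the training code
    main()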

But… doesn’t this only mean that we don’t have the nvidia-smi monitoring utility up and running? The very slow training time on the lesson 1 benchmark (my GTX 1070 runs one cycle in 1:50 instead of 0:20-0:30) may instead come from a JPEG decoding bottleneck, because we cannot install pillow-simd in place of pillow, as reported for example in "How to install precompiled pillow-simd into conda env?"
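
One quick way to see which Pillow build an environment actually picked up (pillow-simd releases normally carry a .postN suffix in their version string, if I'm not mistaken):

import PIL
print(PIL.__version__)   # stock Pillow prints e.g. "5.4.1"; pillow-simd usually prints something like "5.3.0.post0"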

You described my problem exactly (my GTX 1070 runs one cycle in 1:50 instead of 0:20-0:30). I have installed pillow-simd but nothing has changed. Have you solved it?
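
One thing that might help narrow it down is checking from inside the notebook whether the model is actually sitting on the GPU while training (learn here is the lesson 1 Learner; this is a rough substitute for nvidia-smi):

import torch

print(next(learn.model.parameters()).device)              # expect cuda:0 if the model is on the GPU
print(torch.cuda.memory_allocated(0) / 1024**2, 'MiB')    # CUDA memory currently allocated by PyTorch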

I’m having the exact same problem (GTX 1080). Has anyone found a solution?