Pytorch GPU utilization low and CPU utilization high?

I set up the fastai environment on my Windows 10 laptop and everything installed properly. While running lesson-1.ipynb I found that my GPU utilization is low (about 8-10%), whereas CPU utilization goes up to 75%. I don't understand why this is happening.

torch.cuda.is_available() returns True when executed, and torch.backends.cudnn.enabled is True as well.

I looked in Task Manager under the Performance section and found that dedicated GPU memory for PyTorch shows 1 GB / 4 GB (I have a GTX 1050 Ti laptop). I also have a tensorflow-gpu environment, and when I run any model written in TensorFlow/Keras there, GPU utilization is 30-40% and dedicated memory is 3 GB / 4 GB. Is PyTorch working normally, or is something wrong?

Thanks in advance.

Can you run torch.cuda.is_available()? If it returns True, can you check which device you're currently using with torch.cuda.get_device_name(torch.cuda.current_device())?
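The two checks above can be combined into a single diagnostic cell. This is just a sketch (`describe_cuda_device` is a hypothetical helper name, not part of any library), and it assumes PyTorch may or may not be importable:

```python
# Minimal CUDA diagnostic sketch: reports the active GPU, or why none is usable.
def describe_cuda_device():
    """Return a short description of the current CUDA device, or the reason it is unavailable."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if not torch.cuda.is_available():
        return "CUDA is not available to PyTorch"
    idx = torch.cuda.current_device()
    return f"device {idx}: {torch.cuda.get_device_name(idx)}"

print(describe_cuda_device())
```

On a working setup this should print something like `device 0: GeForce GTX 1050 Ti`; any other message points at where the setup is broken.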

The problem resolved itself after a reboot (I don't understand how), and it now shows my GPU name, 'GeForce GTX 1050 Ti'. Thanks!

The image augmentation used in lesson 1 is very CPU-heavy; that's where the CPU usage comes from.

Since you have GPU memory to spare, try increasing the batch size and see if GPU usage goes up accordingly.
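In plain PyTorch (which fastai builds on; the exact fastai arguments may differ), batch size and the number of CPU-side loading workers are both set on the `DataLoader`. A minimal sketch with dummy data, assuming PyTorch is installed:

```python
# Sketch: bigger batches keep the GPU busier per step, and extra workers let
# the CPU-heavy loading/augmentation run in parallel with GPU compute.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for the real image data.
ds = TensorDataset(torch.randn(512, 3, 32, 32), torch.randint(0, 10, (512,)))

dl = DataLoader(
    ds,
    batch_size=128,   # try raising this while GPU memory allows
    num_workers=2,    # parallel CPU workers for data loading
    pin_memory=True,  # speeds up host-to-GPU transfers
)

xb, yb = next(iter(dl))
print(xb.shape)  # first batch of 128 images
```

Note that on Windows, `num_workers > 0` requires the DataLoader to be created under an `if __name__ == "__main__":` guard when run as a script (Jupyter cells are generally fine).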


I am running the same Windows version and GPU model. When trying to run lesson3-planets.ipynb, GPU utilization is very low (~5%; dedicated GPU memory 2.8/4 GB) while CPU utilization is ~30%, and each epoch takes around 7 minutes, whereas the notebook in the lecture ran 5 epochs in under 4 minutes. I can't understand the underlying issue here.

Check the status of CUDA_LAUNCH_BLOCKING by evaluating os.environ.get('CUDA_LAUNCH_BLOCKING') in a Jupyter cell (using .get avoids a KeyError if the variable was never set). If the output is '1', turn it off by executing the command below.

os.environ['CUDA_LAUNCH_BLOCKING'] = "0"

This could be one of the reasons for high CPU usage.
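The check and the reset can go in one self-contained cell. Setting CUDA_LAUNCH_BLOCKING to '1' makes every CUDA kernel launch synchronous, which is useful for debugging but serializes GPU work; this sketch switches it back to asynchronous launches only if it was set:

```python
import os

# os.environ.get avoids a KeyError when the variable was never set.
if os.environ.get('CUDA_LAUNCH_BLOCKING') == '1':
    # '1' forces synchronous kernel launches (debugging aid, hurts throughput);
    # '0' restores the default asynchronous behavior.
    os.environ['CUDA_LAUNCH_BLOCKING'] = '0'

print(os.environ.get('CUDA_LAUNCH_BLOCKING', 'unset'))
```

Note that PyTorch reads this variable when CUDA is initialized, so set it before any CUDA work (or restart the kernel) for the change to take effect.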