GPU gets 6 times slower when CPU is fully utilized

Hi all,

I am using a GTX 1080 Ti GPU. Its performance gets about 6 times slower when my CPU is being used by another process: training takes 3 sec/it when the CPU is busy, whereas it runs at 2 it/sec when the CPU is free. I am using a shared server in my organization, so CPU usage is not under my control. Is there a workaround so the GPU doesn't have to wait for commands from the CPU before executing an iteration?

Thanks in advance for any guidance on this.

This is not possible. All you can do is limit the amount of work your CPU needs to do during training (for instance, use less data augmentation, or resize your images ahead of time to the size you want to train at instead of resizing them on the fly, etc).
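For context, pre-resizing can be done once with a small one-off script like the sketch below. This is just an illustration, not fastai code; the paths and target size are placeholder assumptions.

```python
# Minimal sketch: pre-resize a folder of images once, so the DataLoader
# doesn't have to resize them on the CPU at every epoch.
# src_dir, dst_dir and target_size are illustrative assumptions.
from pathlib import Path
from PIL import Image

src_dir = Path("data/train_full")   # original, full-size images (assumed path)
dst_dir = Path("data/train_224")    # pre-resized copies (assumed path)
dst_dir.mkdir(parents=True, exist_ok=True)

target_size = (224, 224)            # whatever size you plan to train at

for img_path in src_dir.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")
    img = img.resize(target_size, Image.BILINEAR)
    img.save(dst_dir / img_path.name, quality=90)
```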

@radek Understood the point. But I'm just curious: why can't we perform the processing that currently happens on the CPU on the GPU instead (if we instruct the code to do those steps on the GPU)? Is there any practical constraint that forces resizing, data augmentation, etc. to be done on the CPU?

GPUs have an architecture that is very different from that of CPUs. There is no easy way to take code that was compiled to run on a CPU or a VM (Java code, for instance) and run it on a GPU.

You might also find this Medium post interesting: the author faced a similar problem to yours and wrote code to run data augmentation on the GPU using TensorFlow.

Here is another take on this by @machinethink on these very forums: Real time Data Augmentation on GPU?
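To give a flavor of what GPU-side augmentation looks like, here is a rough sketch in PyTorch (not the code from the linked post, which uses TensorFlow): once a batch has been moved to the GPU, cheap tensor ops can do simple augmentations there instead of on the CPU. The batch shape and jitter values are assumptions for the example.

```python
# Rough sketch of GPU-side augmentation in PyTorch (illustrative only).
import torch

def gpu_augment(batch: torch.Tensor) -> torch.Tensor:
    """Apply cheap augmentations to a batch of images (N, C, H, W) on its current device."""
    # Random horizontal flip applied to the whole batch
    if torch.rand(1, device=batch.device) < 0.5:
        batch = torch.flip(batch, dims=[3])
    # Small per-image brightness jitter, computed on the same device as the batch
    scale = 1.0 + 0.1 * (torch.rand(batch.size(0), 1, 1, 1, device=batch.device) - 0.5)
    return (batch * scale).clamp(0, 1)

# Usage: move the batch to the GPU first, then augment there instead of on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
images = torch.rand(8, 3, 224, 224).to(device)   # stand-in for a real batch
images = gpu_augment(images)
```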

Thanks @radek, this is helpful.