Yes, if I do that or run dpkg -l | grep cuda, I get an empty list.
What gets installed with every nvidia-driver package seems to be some kind of runtime, plus the binaries /usr/bin/nvidia-cuda-mps-control and /usr/bin/nvidia-cuda-mps-server:
http://manpages.ubuntu.com/manpages/bionic/en/man1/nvidia-cuda-mps-control.1.html
Those seem to manage the CUDA part of the GPU. Then when I use conda to set up an environment, each version of e.g. pytorch comes with its own cuda and cudnn packages. I seem to have the cuda toolkits for 8.0 and 9.0 running in different conda envs.
Those all show up if you run locate cuda.
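To see which env carries which runtime, you can group the locate output by env name. This is just a sketch; the paths below are hypothetical examples of what locate cuda might print on my setup, and cuda_versions_by_env is a helper name I made up:

```python
import re

def cuda_versions_by_env(paths):
    """Group libcudart versions by conda env name from 'locate cuda'-style paths."""
    envs = {}
    for p in paths:
        m = re.search(r"/envs/([^/]+)/.*libcudart\.so\.(\d+\.\d+)", p)
        if m:
            envs.setdefault(m.group(1), set()).add(m.group(2))
    return envs

# Hypothetical sample of 'locate cuda' output lines:
sample = [
    "/home/me/miniconda3/envs/torch04/lib/libcudart.so.8.0",
    "/home/me/miniconda3/envs/torch10/lib/libcudart.so.9.0",
]
print(cuda_versions_by_env(sample))  # {'torch04': {'8.0'}, 'torch10': {'9.0'}}
```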
Now there still is a dependency between the installed driver / CUDA runtime and the conda packages. I just tried to install pytorch 0.4 with cuda 9.2 enabled, but that gives me False for torch.cuda.is_available(). That seems to be because the minimum driver version (and, I assume, the bundled CUDA runtime) for 9.2 is the nvidia-396 driver, so with the nvidia-390 that I have installed I can only use up to cuda 9.1.
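That version check can be sketched in a few lines. The minimum-driver table below is approximate and from memory of NVIDIA's compatibility docs (verify against the official release notes), and the nvidia-smi header string is just an example:

```python
import re

# Approximate minimum Linux driver versions per CUDA runtime, from memory of
# NVIDIA's compatibility table -- verify against the official release notes.
MIN_DRIVER = {"8.0": 375.26, "9.0": 384.81, "9.1": 390.46, "9.2": 396.26}

def driver_supports(cuda_version, nvidia_smi_header):
    """Compare the driver version printed by nvidia-smi against the
    minimum required for a given CUDA runtime version."""
    m = re.search(r"Driver Version:\s*([\d.]+)", nvidia_smi_header)
    if not m:
        raise ValueError("could not parse driver version")
    return float(m.group(1)) >= MIN_DRIVER[cuda_version]

# Example header line as nvidia-smi would print it for a 390-series driver:
header = "| NVIDIA-SMI 390.48   Driver Version: 390.48   |"
print(driver_supports("9.1", header))  # True:  390.48 >= 390.46
print(driver_supports("9.2", header))  # False: 390.48 < 396.26
```

This matches what I see: with nvidia-390, a cuda 9.2 build of pytorch fails the check while 9.1 works.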
I re-googled a bit; this article is, I think, what made me test this out in the first place. Before, I always just assumed I absolutely had to install the cuda and cudnn packages from nvidia (with all the hassle that comes with it).
from this:
The NVIDIA display drivers come with a CUDA runtime library. That's so you can run CUDA accelerated programs without having CUDA installed on your system. That's usually just what you want.
Didn't know that before. But yes, that is just what I want.