Torch.cuda.is_available() returns False

(hkpoint) #21

Thanks, PeterR! It works for me now.

(Peter Rinaudo) #22

so glad this info helped you huangkun527!
good luck with it all.

(Manjeet Mehta) #23

I faced this issue with a local installation and resolved it with these steps:
1) Installed the CUDA toolkit from NVIDIA's site.
2) Followed the post-installation steps from the installation guide, such as setting the PATH variable.
After completing the post-installation steps, torch.cuda.is_available() returned True.
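For reference, on Linux the post-installation step usually amounts to two environment exports, as described in NVIDIA's CUDA installation guide. The `cuda-9.0` directory below is an example; match it to the toolkit version you actually installed:

```shell
# Add the CUDA toolkit's compiler to PATH and its libraries to the
# dynamic loader path (example paths for a CUDA 9.0 install on Linux).
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```

Putting these lines in `~/.bashrc` (or your shell's equivalent) makes them persist across sessions; after that, `nvcc --version` should print the toolkit's version banner.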

(Carlos Vouking) #24

If you have the right (compatible) CUDA toolkit installed, try killing and restarting your kernel.
I run my models in Jupyter Lab on a GTX 1050 Ti (4 GB) laptop with 16 GB of RAM, dual-booting Windows 10 and Ubuntu 16.04. Sometimes, closing the laptop without a proper shutdown and then rerunning the notebook from where I left off leads to torch.cuda.is_available() returning False. If restarting the kernel does not help, a second option is to restart your system.

Hope this helps.


Hey guys. I hope someone can help me. I'm on Windows 10 on a laptop with a GTX 1060 NVIDIA GPU. I am getting False from torch.cuda.is_available(), but True from torch.backends.cudnn.enabled. I checked the Anaconda environment and CUDA 9.0 is installed, so I am unsure why torch.cuda.is_available() is returning False.
Any help is appreciated.
Thank you.

Additionally, when I run torch.cuda.get_device_name(0), I get this error:

RuntimeError                              Traceback (most recent call last)
in ()
      1 # torch.cuda.is_available()
----> 2 torch.cuda.get_device_name(0)

D:\Anaconda3\envs\fastai\lib\site-packages\torch\ in get_device_name(device)
    272     """
    273     if device >= 0:
--> 274         return torch._C._cuda_getDeviceName(device)

RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:131
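A note on what this error combination means: torch.backends.cudnn.enabled only reports that the cuDNN backend is compiled in and switched on; it does not check that a usable driver is present. cuda runtime error (35) specifically says the installed NVIDIA driver is older than the CUDA runtime that this PyTorch build targets. A small diagnostic sketch using standard torch attributes:

```python
import torch

# torch.version.cuda is the CUDA runtime version this PyTorch build was
# compiled against. If the installed NVIDIA driver is too old for that
# runtime, torch.cuda.is_available() returns False and device queries
# raise "CUDA driver version is insufficient" (runtime error 35).
print("PyTorch built against CUDA:", torch.version.cuda)
print("cuDNN backend enabled:     ", torch.backends.cudnn.enabled)
print("CUDA usable:               ", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    # The usual fix is updating the NVIDIA driver and rebooting,
    # which is exactly what resolved it in Edit #2 below.
    print("Driver may be too old for this CUDA runtime.")
```

If the driver and runtime are mismatched, updating the driver from NVIDIA's site (then rebooting) is the usual fix.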

Edit #2:

I did some driver updates and restarted my computer, and it's returning True now. Lol