Part 1: What version of CUDA should I be using?


I have an Ubuntu 16.04 box with the latest version of Anaconda installed. I’ve created and activated a Python 2.7 environment and installed Keras v1.1.1 ($ conda install keras==1.1.1). My GPU is an NVIDIA GTX 1080 Ti.

I installed CUDA 8.0 because I was previously doing deep learning work with Python 3.6 & Keras 2.0.

This is what I get when I try to import Theano (this also happens when running the Lesson1 notebook):

$ KERAS_BACKEND=theano python -c "from keras import backend"
Using Theano backend.
WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10).  Please switch to the gpuarray backend. You can get more information about how to switch at this URL:

Using gpu device 0: GeForce GTX 1080 Ti (CNMeM is disabled, cuDNN 6021)
/home/reed/anaconda3/envs/python27/lib/python2.7/site-packages/theano/sandbox/cuda/ UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.1.

So no GPU for me. Do I need to downgrade to a previous version of CUDA?


My CUDA setup:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61

$ nvidia-smi
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  GeForce GTX 108...  Off  | 0000:01:00.0      On |                  N/A |
|  0%   48C    P2    62W / 250W |   1497MiB / 11169MiB |      0%      Default |

You misread it: you are using the GPU. CNMeM is just disabled. You can enable it by adding the cnmem option to your .theanorc file.
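For example, a minimal .theanorc might look like this (the 0.8 memory fraction is an assumption; pick a value that suits your card, and note that cnmem applies to the old theano.sandbox.cuda backend referenced in your warning):

```ini
[global]
device = gpu
floatX = float32

[lib]
# Reserve ~80% of GPU memory up front via CNMeM (old cuda backend only)
cnmem = 0.8
```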



I’ve made that change - thank you.

But if I run one of the mnist examples that ships with Keras, it takes about 20x as long as it does under my Python 3.6 & Keras 2.0 environment. Additionally, nvidia-smi never reports any GPU utilization in the Python 2.7 environment, so something else is going on.
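For anyone debugging the same symptom: you can pass the flags for a single run instead of editing .theanorc, which makes it easy to confirm whether Theano is actually targeting the GPU (the script name mnist_cnn.py and the 0.8 cnmem value are illustrative assumptions):

```shell
# One-off run with explicit Theano flags; Theano prints a
# "Using gpu device 0: ..." banner at startup if the GPU is in use.
KERAS_BACKEND=theano \
THEANO_FLAGS='device=gpu,floatX=float32,lib.cnmem=0.8' \
python mnist_cnn.py
```

If the banner says the CPU is being used instead, the slowdown is Theano falling back to CPU compilation rather than a Keras-version issue.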

I have a similar setup and got it working with CUDA 8.0 but cuDNN 5.1.x instead of cuDNN 6, as indicated in your warning message.

Thank you @ishus27 - I ended up having to re-install CUDA from scratch.