PyTorch not working with an old NVIDIA card

(M. Mansour) #1

I am trying to set up the tutorials locally.
OS: Ubuntu 16.04
GPU: GeForce GTX 760

I made sure that the GPU supports CUDA; it actually has over 1,000 CUDA cores, as listed here.
I have also ensured that both CUDA and cuDNN are installed properly, as both of these commands return “True”:
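(The two checks referenced here appear to have been lost in formatting; presumably they are the standard ones, sketched below on the assumption of a CUDA build of PyTorch:)

```python
# Presumed checks (the original snippet was lost in formatting):
import torch

print(torch.cuda.is_available())      # True when the CUDA runtime and driver are usable
print(torch.backends.cudnn.enabled)   # True when cuDNN support is compiled in
```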


Nevertheless, when I execute the first main block of code in lesson 1 that is:

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)

I get the following warning:

Found GPU0 GeForce GTX 760 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.

warnings.warn(old_gpu_warn % (d, name, major, capability[1]))

Then a runtime error appears:

RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/

Is it possible to set up pyTorch with this GPU?
Any help is much appreciated.
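For context, the warning presumably amounts to a comparison of the card's compute capability against the minimum the prebuilt binaries were compiled for. A minimal sketch of that idea (the 3.5 threshold is an assumption about the shipped binaries, not taken from PyTorch's source):

```python
# Sketch of the kind of check behind the warning; MIN_CAPABILITY is an
# assumed threshold for the prebuilt binaries, not PyTorch's actual value.
MIN_CAPABILITY = (3, 5)

def binaries_support(capability):
    """True if a (major, minor) compute capability meets the assumed minimum."""
    return capability >= MIN_CAPABILITY

print(binaries_support((3, 0)))  # GTX 760 -> False, hence the warning
print(binaries_support((5, 0)))  # the 940MX mentioned below -> True
```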

(M. Mansour) #2

Solved by compiling PyTorch from source. Here is how to do it in detail:
1- Activate the virtual env:

source activate fastai

2- Uninstall PyTorch:

conda uninstall pytorch

3- Clone the PyTorch repo:

git clone --recursive https://github.com/pytorch/pytorch

4- Check out the v0.3.1 tag (so that we install PyTorch 0.3.1):

cd pytorch
git checkout v0.3.1

5- Sort out dependencies (I only needed cmake):

git submodule update --init
sudo apt install cmake

6- Install; this is where the compiling takes place (took nearly 30 minutes):

python setup.py install
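Once the build finishes, a quick sanity check that the source build took effect and sees the GPU (hedged: the exact version string and availability depend on your setup):

```python
import torch

print(torch.__version__)          # should report a 0.3.1 source build
print(torch.cuda.is_available())  # should be True on the GTX 760
```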

(Bharadwaj Srigiriraju) #3

Just curious, is your PyTorch working with your old NVIDIA card… did you check by running code on the GPU? I have a machine with an old 940MX card with compute capability 5.0 and would like to use it for prototyping.

AFAIK, v0.3.1 dropped support for old cards… correct me if I am wrong…

It would help if you let us know which CUDA and cuDNN versions you had installed at the time of building PyTorch. Thanks!

P.S. I have been able to get 0.3.0 working with CUDA 9.0 on my card, just wondering if there’s a way to get 0.3.1 working.

(Parth Rohilla) #4

Could you please help by explaining in a bit more detail how to compile PyTorch from source?
I am new to deep learning and only familiar with Anaconda prompt commands.
I would be grateful.

(Parth Rohilla) #5

Did you solve the error?

(M. Mansour) #6

Yes, PyTorch 0.3.1 is working with my old GPU (GTX 760); just follow my steps above.
For CUDA I used this package:
And for cuDNN I used:

(M. Mansour) #7

Did you follow the steps in my reply above, where I explained in detail how to solve this? If so, at what point did you face a problem?

(M. Mansour) #8

Yes, I did. Follow the steps in my reply above if you are facing the same problem.

(Parth Rohilla) #9

I am getting stuck at the last step (step 6). It reports that the file is not found, however it is there. (I am using Windows 10.)
Will installing the CPU version of PyTorch at least run the code without errors?

(M. Mansour) #10

Are you using Bash inside Windows? Could you run ls in the directory you are in and paste the output? Also, could you paste the error message you are getting?

(Parth Rohilla) #11

$ python setup.py install
bash: python: command not found
(when using git bash)

running install
running build_deps
error: [WinError 2] The system cannot find the file specified
(when using anaconda prompt)