PyTorch not working with an old NVIDIA card

After a long search I found the solution.
The problem is that PyTorch versions before 0.4.0 don't work if the GPU is old, so you have to install the newest PyTorch. Run this:

conda install -c pytorch pytorch

After you install the newest PyTorch you will hit a "from torch._C import * ... DLL load failed" problem. To fix that, run the following:

set PYTORCH_BUILD_VERSION=0.4.1
conda install -c pytorch pytorch
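
A quick sanity check after the reinstall (my addition, not part of the original instructions) is to run one tiny op on the GPU from Python; if the card is still unsupported you should see the "too old" warning or a "no kernel image is available" error instead of a tensor:

import torch

# Print the installed version and whether CUDA is usable, then try one small
# operation on the GPU to make sure kernels for this card actually exist.
print(torch.__version__, torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.ones(1).cuda() * 2)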

Hello. I had just finished the fast.ai setup locally when I faced the same issue, because I have an NVIDIA GeForce 830M. I followed these steps to install PyTorch from source. In the last step, when I ran python setup.py install, it gave me an error:
GCC runtime error 1. It said 'CuDNN v5 found, but need at least CuDNN v6'. I am new to setting this up; I would be grateful if anyone could help me through this.

The confusing part is that in my fastai environment there is already a conda package for cuDNN v7.2.1.
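
One hedged way to untangle this (my suggestion, not from the original post): the cudnn conda package sitting in the environment and the cuDNN that the PyTorch build was compiled against can be different things, and PyTorch will tell you which one it is actually using:

import torch

# Reports the cuDNN version the installed PyTorch binary is linked against
# (printed as an integer such as 7102 for 7.1.2); compare it with the version
# of the cudnn conda package in the environment.
print(torch.backends.cudnn.enabled, torch.backends.cudnn.version())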

I followed the steps. Why am I getting a 'CuDNN version not supported' error after the last step?

I have installed PyTorch 0.4; the old GPU problem is gone, but a new error came up:

UnicodeDecodeError Traceback (most recent call last)
in <module>()
----> 1 learn = ConvLearner.pretrained(arch, data, precompute=True)

~/fastai/courses/dl1/fastai/conv_learner.py in pretrained(cls, f, data, ps, xtra_fc, xtra_cut, custom_head, precompute, pretrained, **kwargs)
112 models = ConvnetBuilder(f, data.c, data.is_multi, data.is_reg,
113 ps=ps, xtra_fc=xtra_fc, xtra_cut=xtra_cut, custom_head=custom_head, pretrained=pretrained)
--> 114 return cls(data, models, precompute, **kwargs)
115
116 @classmethod

~/fastai/courses/dl1/fastai/conv_learner.py in __init__(self, data, models, precompute, **kwargs)
98 if hasattr(data, 'is_multi') and not data.is_reg and self.metrics is None:
99 self.metrics = [accuracy_thresh(0.5)] if self.data.is_multi else [accuracy]
--> 100 if precompute: self.save_fc1()
101 self.freeze()
102 self.precompute = precompute

~/fastai/courses/dl1/fastai/conv_learner.py in save_fc1(self)
177 m=self.models.top_model
178 if len(self.activations[0])!=len(self.data.trn_ds):
--> 179 predict_to_bcolz(m, self.data.fix_dl, act)
180 if len(self.activations[1])!=len(self.data.val_ds):
181 predict_to_bcolz(m, self.data.val_dl, val_act)

~/fastai/courses/dl1/fastai/model.py in predict_to_bcolz(m, gen, arr, workers)
15 lock=threading.Lock()
16 m.eval()
---> 17 for x,*_ in tqdm(gen):
18 y = to_np(m(VV(x)).data)
19 with lock:

~/anaconda3/envs/fastai/lib/python3.6/site-packages/tqdm/_tqdm.py in __iter__(self)
935 """, fp_write=getattr(self.fp, 'write', sys.stderr.write))
936
--> 937 for obj in iterable:
938 yield obj
939 # Update and possibly print the progressbar.

~/fastai/courses/dl1/fastai/dataloader.py in __iter__(self)
86 # avoid py3.6 issue where queue is infinite and can result in memory exhaustion
87 for c in chunk_iter(iter(self.batch_sampler), self.num_workers*10):
---> 88 for batch in e.map(self.get_batch, c):
89 yield get_tensor(batch, self.pin_memory, self.half)
90

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/_base.py in result_iterator()
584 # Careful not to keep a reference to the popped future
585 if timeout is None:
--> 586 yield fs.pop().result()
587 else:
588 yield fs.pop().result(end_time - time.time())

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
423 raise CancelledError()
424 elif self._state == FINISHED:
--> 425 return self.__get_result()
426
427 self._condition.wait(timeout)

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result

~/anaconda3/envs/fastai/lib/python3.6/concurrent/futures/thread.py in run(self)
54
55 try:
---> 56 result = self.fn(*self.args, **self.kwargs)
57 except BaseException as exc:
58 self.future.set_exception(exc)

~/fastai/courses/dl1/fastai/dataloader.py in get_batch(self, indices)
73
74 def get_batch(self, indices):
---> 75 res = self.np_collate([self.dataset[i] for i in indices])
76 if self.transpose: res[0] = res[0].T
77 if self.transpose_y: res[1] = res[1].T

~/fastai/courses/dl1/fastai/dataloader.py in <listcomp>(.0)
73
74 def get_batch(self, indices):
---> 75 res = self.np_collate([self.dataset[i] for i in indices])
76 if self.transpose: res[0] = res[0].T
77 if self.transpose_y: res[1] = res[1].T

~/fastai/courses/dl1/fastai/dataset.py in __getitem__(self, idx)
201 xs,ys = zip(*[self.get1item(i) for i in range(*idx.indices(self.n))])
202 return np.stack(xs),ys
--> 203 return self.get1item(idx)
204
205 def __len__(self): return self.n

~/fastai/courses/dl1/fastai/dataset.py in get1item(self, idx)
194
195 def get1item(self, idx):
--> 196 x,y = self.get_x(idx),self.get_y(idx)
197 return self.get(self.transform, x, y)
198

~/fastai/courses/dl1/fastai/dataset.py in get_x(self, i)
297 super().__init__(transform)
298 def get_sz(self): return self.transform.sz
--> 299 def get_x(self, i): return open_image(os.path.join(self.path, self.fnames[i]))
300 def get_n(self): return len(self.fnames)
301

~/fastai/courses/dl1/fastai/dataset.py in open_image(fn)
266 elif os.path.isdir(fn) and not str(fn).startswith("http"):
267 raise OSError('Is a directory: {}'.format(fn))
--> 268 elif isdicom(fn):
269 slice = pydicom.read_file(fn)
270 if slice.PhotometricInterpretation.startswith('MONOCHROME'):

~/fastai/courses/dl1/fastai/dataset.py in isdicom(fn)
250 with open(fn) as fh:
251 fh.seek(0x80)
--> 252 return fh.read(4)=='DICM'
253
254 def open_image(fn):

~/anaconda3/envs/fastai/lib/python3.6/codecs.py in decode(self, input, final)
319 # decode input (taking the buffer into account)
320 data = self.buffer + input
--> 321 (result, consumed) = self._buffer_decode(data, self.errors, final)
322 # keep undecoded input until the next call
323 self.buffer = data[consumed:]

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 30: invalid start byte

I have also tried PyTorch 0.3 and it gives the same UnicodeDecodeError…
Have you seen this error in the process, or do you have any suggestions? Thank you.
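
For what it's worth, the traceback points at fastai's isdicom helper: it opens the image with open(fn) in text mode, so Python tries to decode raw image bytes as UTF-8. A later reply notes that a git pull picks up a fix; a minimal sketch of that kind of fix (my reconstruction, not necessarily the exact fastai code) is:

def isdicom(fn):
    # True if the file carries the 'DICM' magic marker at offset 0x80.
    # Opening in binary mode ('rb') and comparing against bytes avoids the
    # UnicodeDecodeError that text mode raises on ordinary image files.
    with open(fn, 'rb') as fh:
        fh.seek(0x80)
        return fh.read(4) == b'DICM'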

Hey,
I got the same error and could not work on the course. Help needed!

OK, managed to fix it!
Do a git pull; they seem to have fixed the files.
Now I get the warning, but at least I can run stuff.
(It's horribly slow though.)
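
If it is crawling, one possible (hedged) explanation is that everything is silently running on the CPU. A rough timing check of my own:

import time
import torch

# Time one matrix multiply on the CPU and, if possible, on the GPU; if the GPU
# run is not clearly faster (or .cuda() fails), training is likely falling back
# to the CPU.
x = torch.randn(2000, 2000)
t0 = time.time(); _ = x @ x; cpu_s = time.time() - t0
if torch.cuda.is_available():
    xg = x.cuda()
    _ = xg @ xg; torch.cuda.synchronize()   # warm-up
    t0 = time.time(); _ = xg @ xg; torch.cuda.synchronize()
    print('cpu %.3fs  gpu %.3fs' % (cpu_s, time.time() - t0))
else:
    print('CUDA not available; cpu %.3fs' % cpu_s)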

Yes, it works now. Thank you for the reminder!

So I am facing a similar problem. My GTX 960M is unable to run because it's too old. I have downloaded the zip file from the Drive. May I ask how to install it via conda?

Is there some kind of map of which GPUs work with which versions of PyTorch?

Just for reference: the solution already suggested above by some people, conda update pytorch=0.4.0, solves the 'too old' GPU issue, at least it did for my GTX 960. I then got an out-of-memory error, but that's a separate post :slight_smile:

I compiled my Pytorch from source. Tedious, but didn’t give me any CUDA errors.

Hi all,
I have a Windows machine with an old GPU (960M)
and I am having problems running fastai lesson 1.

I get the following error:
RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device at c:\anaconda2\conda-bld\pytorch_1519501749874\work\torch\lib\thc\generic/THCTensorMath.cu:15

I tried the following steps:

  1. I don't know how to install PyTorch 0.4.0.
  2. I tried this solution: conda install -c peterjc123 pytorch

Nothing is working for me. Please help.
Thanks
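
Error 48 ("no kernel image is available") generally means the installed binary does not include compiled kernels for the card's compute capability. A quick diagnostic (my sketch, assuming CUDA itself is installed and working):

import torch

# Show the installed PyTorch version, then the card's name and compute
# capability as a (major, minor) tuple; prebuilt binaries only ship kernels
# for a limited range of capabilities, which is what error 48 complains about.
print(torch.__version__)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))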

Has anybody come across this error while compiling PyTorch from source?

[  6%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wrappers.pb.cc.o
[  6%] Linking CXX static library ../../../lib/libprotobuf.a
[  6%] Built target libprotobuf
make: *** [Makefile:141: all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn --use-qnnpack caffe2'

My GCC version is newer than 7 and I got some warnings.

My system is Fedora 29 and my GPU is an NVIDIA 940M.

I have CUDA and cuDNN set up correctly and can run the CUDA examples.

For anyone running Fedora Linux, I would recommend reading these articles:

This one covers installation of CUDA, cuDNN, NCCL and other libraries, setting up the environment and its variables, and compiling PyTorch.

There is a very important point about the GCC version: you need a specific (older) version of GCC and G++, which is covered by the article.

This will also provide some support:
http://gibbalog.blogspot.com/
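
If the GCC point applies to you, one way to steer the source build toward an older compiler pair is via the standard CC/CXX environment variables, which CMake respects. This is only a rough sketch; the compiler paths below are assumptions, so substitute whatever the article has you install:

import os
import subprocess

# Hypothetical paths to an older GCC/G++ pair; adjust for your distro.
os.environ['CC'] = '/usr/bin/gcc-7'
os.environ['CXX'] = '/usr/bin/g++-7'

# Re-run the PyTorch source build so CMake picks up the compilers above.
subprocess.run(['python', 'setup.py', 'install'], check=True)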

I’m compiling PyTorch 0.4.1 with CUDA 10 for my mobile 940M… 69% done and crossing my fingers…

This is what ultimately fixed the message about PyTorch not working on my old card:
conda install pytorch cuda91 -c pytorch
I found it here: Howto: installation on Windows

Right after that I ran into an out-of-memory error on my card. I wrote about it here: https://medium.com/@andresesfm/fast-ai-deep-learning-class-notes-917a2e188e2d
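
After installing the cuda91 build, you can confirm which CUDA toolkit the binary was compiled against (a small check of my own, not from the linked howto):

import torch

# torch.version.cuda is the CUDA version the binary was built with (e.g. '9.1.85');
# it should match the cuda91 package that was just installed.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())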


Thanks for the solution.

… and same here: out of memory with my GTX 750 Ti with 2 GB of RAM.

Probably time for a 1060 / 1070? Or is there any workaround?

Very well!

Finally I made it. To recap, when you see the warning "PyTorch no longer supports this GPU because it is too old", perform the following:

  1. Activate the virtual env fastai:

source activate fastai

  2. Uninstall PyTorch:

conda uninstall pytorch

  3. Reinstall PyTorch:

conda install -c pytorch pytorch

  4. Start the Jupyter Notebook and locate the line, for example in lesson1:

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=16)

The extra parameter bs (batch size) lets you specify a smaller batch that can fit in your GPU memory. Here I use a value of 16, which works well with my GTX 750 Ti with 2 GB of RAM.
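
If you want to see how much room the card actually has before settling on a batch size, here is a hedged helper of my own (it assumes a CUDA device at index 0):

import torch

# Total memory on the card versus what PyTorch has currently allocated, in MiB;
# if an epoch still dies with an out-of-memory error, lower bs further.
props = torch.cuda.get_device_properties(0)
print(props.name, 'total: %.0f MiB' % (props.total_memory / 1024 ** 2))
print('allocated: %.0f MiB' % (torch.cuda.memory_allocated() / 1024 ** 2))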

It took me around 6 minutes to complete the 2 epochs :wink:

Happy learning!!!