April 23, 2018: Just noticed this on the PyTorch forum:
UPDATE:
I was nervous about downloading the peterjc123 build from the internet in case it had also been updated, so I approached this as a manual exercise of copying over what I thought might be needed:
I removed cuda90-1.0-h4c72538_0.json and pytorch-0.3.1-py36_cuda90_cudnn7he774522_2.json from the fastai/conda-meta directory and replaced them with the equivalent cuda80-1.0-h205658b_0.json and pytorch-0.3.0-py36_0.3.0cu80.json from my working base system.
I likewise removed the torch and torch-0.3.1.post2-py3.6.egg-info directories from fastai/Lib/site-packages and replaced them with the torch and torch-0.3.0b0+591e73e-py3.6.egg-info directories from my base system.
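For reference, here is a minimal sketch of those copy steps in Python. The fastai environment path comes from the tracebacks below; the base environment path is an assumption, so adjust both to your own install, and back up anything you overwrite, since this bypasses conda's own bookkeeping.

import shutil
from pathlib import Path

base = Path(r"D:\Anaconda3")                # base environment (assumed path)
fastai = Path(r"D:\Anaconda3\envs\fastai")  # fastai environment (path from the traceback)

# 1. Swap the conda-meta records for cuda and pytorch.
for name in ["cuda90-1.0-h4c72538_0.json",
             "pytorch-0.3.1-py36_cuda90_cudnn7he774522_2.json"]:
    (fastai / "conda-meta" / name).unlink()
for name in ["cuda80-1.0-h205658b_0.json",
             "pytorch-0.3.0-py36_0.3.0cu80.json"]:
    shutil.copy2(base / "conda-meta" / name, fastai / "conda-meta" / name)

# 2. Swap the installed torch package and its egg-info.
for name in ["torch", "torch-0.3.1.post2-py3.6.egg-info"]:
    shutil.rmtree(fastai / "Lib" / "site-packages" / name)
for name in ["torch", "torch-0.3.0b0+591e73e-py3.6.egg-info"]:
    shutil.copytree(base / "Lib" / "site-packages" / name,
                    fastai / "Lib" / "site-packages" / name)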
Now when I fire up ipython and test (see the quick check below), things work, and it reports the same information as my base system.
There may be gotchas with this approach; if anyone knows of any, please let me know. I will post updates if I need to do any more work to get it functioning.
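As a quick sanity check after the swap, the same calls used in the sessions below should now report the old version and the cuda80 runtime in the fastai environment:

import torch

print(torch.__version__)                   # expect 0.3.0b0+591e73e after the revert
print(torch.cuda.find_cuda_windows_lib())  # expect cudart64_80 rather than cudart64_90

# The addition below is exactly what failed under 0.3.1.post2.
x = torch.cuda.FloatTensor(1)
y = torch.cuda.FloatTensor(1)
print(x + y)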
The original post: I have recently updated my fastai environment (under Windows 10) and now I get errors from even basic functionality. PyTorch reports it is now version 0.3.1.post2 and says it no longer supports my old graphics card, so I want to revert to the earlier version 0.3.0b0+591e73e, which worked, or find some other solution apart from buying a new graphics card, which is neither justified nor even supported by this PC (I intend to buy something better but am not there yet).
Note that the error may be related to the CUDA runtime DLL version, as this also differs in my fastai environment (cudart64_90 there vs cudart64_80 in the base environment).
I am able to reproduce the differences between the versions like this:
- In the fastai environment:
import torch
print(torch.__version__)
0.3.1.post2
torch.cuda.find_cuda_windows_lib()
<CDLL 'cudart64_90', handle 7ffc97ed0000 at 0x1b2ebaa9860>
torch.cuda.set_device(0)
torch.cuda.get_device_capability(0)
(3, 0)
x = torch.cuda.FloatTensor(1)
D:\Anaconda3\envs\fastai\lib\site-packages\torch\cuda\__init__.py:116: UserWarning:
Found GPU0 GeForce GTX 650 Ti which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
x
0
[torch.cuda.FloatTensor of size 1 (GPU 0)]
y = torch.cuda.FloatTensor(1)
y
0
[torch.cuda.FloatTensor of size 1 (GPU 0)]
x + y
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-33-cd60f97aa77f> in <module>()
----> 1 x + y
D:\Anaconda3\envs\fastai\lib\site-packages\torch\tensor.py in __add__(self, other)
303 # TODO: add tests for operators
304 def __add__(self, other):
--> 305 return self.add(other)
306 __radd__ = __add__
307
RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device at c:\anaconda2\conda-bld\pytorch_1519501749874\work\torch\lib\thc\generic/THCTensorMathPointwise.cu:301
=======================================================
- On the other hand, in my base environment, which is still on 0.3.0b0+591e73e (which I believe is a peterjc123 build), the same operations work fine:
print(torch.__version__)
0.3.0b0+591e73e
torch.cuda.find_cuda_windows_lib()
<CDLL 'cudart64_80', handle 7ffc367f0000 at 0x24586fa5e80>
x = torch.cuda.FloatTensor(1)
x
0
[torch.cuda.FloatTensor of size 1 (GPU 0)]
y = torch.cuda.FloatTensor(1)
y
0
[torch.cuda.FloatTensor of size 1 (GPU 0)]
x + y
0
[torch.cuda.FloatTensor of size 1 (GPU 0)]
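For anyone stuck on a similar card, a small guard (a sketch, not something from the original post) can keep a notebook on the CPU instead of dying with the "no kernel image is available" error. The minimum capability of (3, 5) is an assumption about what the 0.3.1 Windows binaries were built for; the GTX 650 Ti above reports (3, 0) and is rejected.

import torch

# Assumed minimum compute capability required by the installed CUDA binaries.
MIN_CAPABILITY = (3, 5)

use_cuda = torch.cuda.is_available()
if use_cuda:
    major, minor = torch.cuda.get_device_capability(0)
    if (major, minor) < MIN_CAPABILITY:
        print("GPU capability (%d, %d) is below %s; falling back to the CPU."
              % (major, minor, MIN_CAPABILITY))
        use_cuda = False

# Allocate on whichever device survived the check.
x = torch.cuda.FloatTensor(1) if use_cuda else torch.FloatTensor(1)
y = torch.cuda.FloatTensor(1) if use_cuda else torch.FloatTensor(1)
print(x + y)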