Hmm, yes, it’s a big mess - pytorch has just introduced pytorch-nightly-cpu, which is not pytorch-nightly, so conda dependencies are broken. Thank you for reporting this issue, @tnisonoff - somehow I didn’t think that it’d break.
Any luck with installing just the dependencies:
conda install fastai -c fastai --only-deps
and then installing fastai from source?
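For reference, the full two-step sequence might look like this (the clone path and use of an editable install are illustrative, not prescriptive):

```shell
# step 1: install only fastai's dependencies, not fastai itself
conda install -c fastai fastai --only-deps

# step 2: install fastai itself from a source checkout
git clone https://github.com/fastai/fastai
cd fastai
pip install -e .
```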
While I’m thinking about a possible solution here, perhaps simply use pip install instructions?

I’m able to reproduce this. I will post back once I have a solution.
An obvious solution that would work is to create fastai-cpu conda package that depends on pytorch-nightly-cpu. But I’m trying to think of something better.
I needed to do some googling to find the cpu location of torch_nightly. I can submit a PR to update the README with pip instructions for cpu mode if you’d like! The current README instructions won’t lead to the correct torch_nightly afaict.
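For the record, the CPU nightly wheels live under a separate index with an extra `/cpu/` path segment (URL as of this thread; it may change):

```shell
# CPU-only nightly build -- note the /cpu/ segment, which the
# GPU index referenced in the README doesn't have
pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
```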
Not for the package manager - that would be one more package to manage, mostly duplicated dependencies, etc.
There would be a much simpler solution: removing the dependency on torch - which is fine, since we already have separate instructions for installing it. Except we can only do that in pip packages; conda will not let you cheat.
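The pip-side "cheat" amounts to leaving torch out of the package's dependency list. A hypothetical sketch (not fastai's actual setup.py; the package names here are illustrative):

```python
# Hypothetical sketch of how a pip package can "cheat": torch is simply
# left out of install_requires, and users install it separately per the
# docs. A conda package can't do this -- conda resolves the full
# dependency graph from its own metadata, so every runtime dependency
# must be declared under a single concrete package name.
install_requires = [
    "fastprogress>=0.1.9",
    "numpy",
    "matplotlib",
    # note: no "torch" / "pytorch-nightly" entry here
]

# pip will happily install the package with this list and never
# complain that torch is missing at install time
assert not any(req.startswith("torch") for req in install_requires)
```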
What we need is for conda to support a dependency with several alternative names, but I doubt this is going to happen any time soon, since what’s happening with pytorch right now is so rare.
Hmm, it’s odd that this worked, since if I go to Start Locally | PyTorch and select Preview/Mac/No-CUDA it says “# Preview Build Not Yet Available on MacOS.”
Perhaps it’s not stable enough yet, and that’s why they don’t advertise it?
Currently the cheating package is in the test label (hidden from -c fastai users); I tested that it installs just fine. If you confirm that it works for you, I will release it to all and update the docs.
and if somebody wants to test the non-cpu pytorch, it’d be:
(py37) [tyler-ekseks2] ~/sandbox/asana: conda install -c fastai/label/test fastai
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- fastai
- dataclasses
- fastai
- fastprogress[version='>=0.1.9']
Current channels:
.....
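One likely cause of the failure above (an assumption on my part, not verified here): with only the test label given on the command line, conda cannot see the main channels that provide the remaining dependencies. Listing the channels together may resolve it:

```shell
# pull the test-label fastai build, while still letting conda resolve
# its dependencies (dataclasses, fastprogress, ...) from the main channels
conda install -c fastai/label/test -c fastai -c pytorch fastai
```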
We now have 8 different configurations of fastai-v1 builds tested on Azure CI: Mac|Linux × py3.6|py3.7 × conda|pip:
Unfortunately all cpu-only, but that’s a good start.
If you know of a free CI service integrated with GitHub that provides free access to a GPU, that would be great. I know we could use AWS/GCE, but that’s not free.
I found CircleCI, which provides limited builds for free, but I’m not sure whether GPU access is included in the free tier.
The error was similar to this one, which I copied from the SO post (I no longer have the exact one I got on Azure):
>>> import matplotlib.pyplot as plt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg/matplotlib/pyplot.py", line 98, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "//anaconda/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg/matplotlib/backends/__init__.py", line 28, in pylab_setup
globals(),locals(),[backend_name],0)
File "//anaconda/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg/matplotlib/backends/backend_macosx.py", line 21, in <module>
from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends
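A common workaround for this macOS error (not necessarily what was used on the CI, but the standard fix) is to select a backend that doesn’t require a framework build of Python before pyplot is first imported:

```python
import matplotlib
# Pick a backend that works without a framework build of Python on macOS.
# 'Agg' is non-interactive (file output only); 'TkAgg' is an option if
# you need interactive windows.
matplotlib.use('Agg')
import matplotlib.pyplot as plt

# a quick smoke test: render a figure to a file, no GUI needed
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig('smoke_test.png')
```

Setting `backend: Agg` in `matplotlibrc` achieves the same thing without code changes.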
@stas Problem: No longer have access to GPU… cuda available is false…
Report:
platform info : Linux-4.15.0-36-generic-x86_64-with-debian-stretch-sid
distro info : Ubuntu 16.04 Xenial Xerus
python version : 3.6.5
fastai version : 1.0.6.dev0
torch version : 1.0.0.dev20181008
nvidia driver : 396.37
cuda available : False
cuda version : 9.2.148
cudnn version : 7104
cudnn available: True
torch gpu count: 0
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
A brief summary of the problem:
No longer have GPU access (it was working fine with the previous fastai)
torch.cuda.is_available() is FALSE (but torch.backends.cudnn.enabled is TRUE.)
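One quick check worth running here (my assumption: this distinguishes "a CPU-only torch build got installed" from "a driver/CUDA problem"):

```python
import torch

# If a CPU-only build (e.g. pytorch-nightly-cpu) was installed,
# torch.version.cuda is None -- no driver fix will help, and the cure is
# to reinstall the CUDA build of the nightly. If torch.version.cuda is
# set but is_available() is False, look at the driver/CUDA setup instead.
print("torch build CUDA version:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
print("gpu count:", torch.cuda.device_count())
```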