Trying to discover what is causing the kernel to restart in my conda environment, I executed each one of these lines in the python console:
from fastai import *
from fastai.vision import *
from fastai.collab import *
from fastai.tabular import *
from fastai.text import *
from fastai.docs import *
Each of them stopped at the following lines:
# code object from '/home/marco/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/vision/__pycache__/transform.cpython-36.pyc'
import 'fastai.vision.transform' # <_frozen_importlib_external.SourceFileLoader object at 0x7fd88eb5f2b0>
Is ‘fastai.vision.transform’ also being imported by collab, tabular, text, and docs?
I’m currently building v2.0.13.dev3 of spaCy with the updated regex version. The exact pin was unfortunate, but regex doesn’t follow semver, making it hard to give a range :(. The dev version should be uploaded within the next half hour or so (CI can take some time).
Once it’s up, you should be able to set your version pin to spacy==2.0.13.dev3 to verify that it all works. I can then publish 2.0.13 properly, so you can set your pin to spacy>=2.0.13,<2.1.0
Edit: Getting test failures with regex==2018.08.29; some of the tests are simply hanging. I guess I can try some earlier versions, but it makes me nervous about rushing out a version on this. I’m worried performance could be much worse on some inputs, for some languages.
I started a new thread for install issues - please use that for any future reports. Thank you.
@elmarculino, thank you for your patience. Would you kindly post the updated state of the issues you’re experiencing to the thread I mentioned above? I’m also not sure whether you have this issue in Jupyter Notebook or Lab. Does it work OK in the notebook? Can you run other non-fastai code? Thank you.
Also see 1, 2 - perhaps one of these is your culprit? But please continue in the other thread with your details, since it’ll probably help others in the same boat down the river. Thanks.
You can install nb_conda_kernels, which provides a separate jupyter kernel for each conda environment, along with the appropriate code to handle their setup. This makes switching conda environments as simple as switching jupyter kernel (e.g. from the kernel menu). And you don’t need to worry which environment you started jupyter notebook from - just choose the right environment from the notebook.
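For reference, the setup described above could look something like this (a sketch; installing into the base environment where jupyter runs is the usual recommendation, but check the nb_conda_kernels README for your setup):

```
# install into the environment that launches jupyter (often base)
conda install -n base nb_conda_kernels

# any environment with ipykernel installed will then show up as a kernel
conda install -n fastai ipykernel
```

After restarting jupyter, each conda environment appears as its own entry in the kernel menu.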
Thank you, @TheShadow29. I haven’t yet tried installing with miniconda. I have added conda update conda to the docs, which should take care of this situation.
platform info : Darwin-17.7.0-x86_64-i386-64bit
python version : 3.6.6
fastai version : 1.0.5.dev0
torch version : 1.0.0.dev20180921
cuda available: False
cuda version : None
cudnn available: True
gpu count : 0
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/XY/Downloads/fastai/fastai/torch_core.py", line 242, in show_install
gpus = GPUtil.getGPUs()
File "/Users/XY/anaconda3/lib/python3.6/site-packages/GPUtil/GPUtil.py", line 64, in getGPUs
p = Popen(["nvidia-smi","--query-gpu=index,uuid,utilization.gpu,memory.total,memory.used,memory.free,driver_version,name,gpu_serial,display_active,display_mode", "--format=csv,noheader,nounits"], stdout=PIPE)
File "/Users/XY/anaconda3/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/Users/XY/anaconda3/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'nvidia-smi': 'nvidia-smi'
It seems to need the nvidia-smi tool, which it does not find on this CPU-only machine.
I will check it later on my paperspace machine with GPU and post what I get there.
I’ve just added a module that will display test docstrings when running tests, so you’ll need to install it:
pip install pytest-pspec
I’ve added a basic end-to-end MNIST vision test that checks >98% accuracy after 1 epoch. It takes about 5 secs on a 1080ti. I think it’s a good idea to have at least one full integration test, although I’m open to using something else if the speed of this one is an issue for too many people. Or maybe there needs to be some easy way for particular people to disable it, if they don’t have a GPU.
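One possible shape for the “easy way to disable it” idea is an opt-in pytest skip guard. This is only a sketch: the environment variable name FASTAI_RUN_SLOW and the test name are illustrative, not anything fastai actually defines.

```python
import os

import pytest

# Opt-in guard for slow/GPU-heavy integration tests.
# FASTAI_RUN_SLOW is a hypothetical env var name, used here for illustration.
RUN_SLOW = os.environ.get("FASTAI_RUN_SLOW", "0") == "1"

slow = pytest.mark.skipif(
    not RUN_SLOW,
    reason="slow integration test; set FASTAI_RUN_SLOW=1 to run",
)


@slow
def test_mnist_end_to_end():
    # The real test would train for one epoch and assert accuracy > 0.98.
    pass
```

With this in place, plain `py.test` skips the integration test, and CPU-only users lose nothing, while anyone with a GPU can run `FASTAI_RUN_SLOW=1 py.test` before pushing.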
I still get a similar error after the line with “torch gpu count”:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/MMP/Downloads/fastai/fastai/torch_core.py", line 271, in show_install
gpus = GPUtil.getGPUs()
File "/Users/MMP/anaconda3/lib/python3.6/site-packages/GPUtil/GPUtil.py", line 64, in getGPUs
p = Popen(["nvidia-smi","--query-gpu=index,uuid,utilization.gpu,memory.total,memory.used,memory.free,driver_version,name,gpu_serial,display_active,display_mode", "--format=csv,noheader,nounits"], stdout=PIPE)
File "/Users/MMP/anaconda3/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/Users/MMP/anaconda3/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'nvidia-smi': 'nvidia-smi'
OK, GPUtil is a wrapper for nvidia-smi, and doesn’t handle its absence gracefully. I removed it. If you could kindly try a third time after a git pull - hopefully it’ll work OK now. Thank you for your support, @MicPie.
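For anyone curious, a defensive pattern for this situation (a sketch, not the actual fastai fix) is to check that the nvidia-smi binary exists before ever touching GPUtil:

```python
import shutil


def get_gpus_safe():
    """Return GPU info if nvidia-smi is available, else an empty list."""
    if shutil.which("nvidia-smi") is None:
        return []  # CPU-only machine: NVIDIA driver tools not installed
    import GPUtil  # imported lazily, only when the tool actually exists

    return GPUtil.getGPUs()
```

On a CPU-only machine this returns an empty list instead of raising FileNotFoundError from the Popen call in the traceback above.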
Do we want this enabled for make test though? This addition puts it into the detailed mode, whereas normally we want it to be compact.
Let’s experiment. What’s an easy way to tell torch to ignore my gpu?
Yes, we probably need to skip these kinds of tests on CPU by default and have an option to override that. Otherwise people won’t run the test suite. I have an old PC at the moment, so it’s very slow on CPU:
time py.test tests/test_vision.py
time CUDA_VISIBLE_DEVICES=" " py.test tests/test_vision.py
w/ GPU: ~30 secs
w/o GPU: ~15 min
No, we don’t. Sorry, I didn’t realize it changes the detail level. Ideally I’d just like it to show the pspec-style names of failing tests. I’ll add figuring that out to the todo list.
I’d like devs to always run the integration test before pushing a non-trivial change.
I haven’t spent time learning about pytest yet but I’m sure there will be some way we can have different categories of tests.
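pytest markers are one such mechanism (a sketch; the marker name `integration` and the test name are illustrative, not something fastai defines):

```python
import pytest


# Tag the expensive end-to-end test with a custom marker.
@pytest.mark.integration
def test_mnist_full_run():
    pass  # the real end-to-end training check would go here
```

Then `py.test -m integration` runs only the marked tests, and `py.test -m "not integration"` skips them. (Custom markers can be registered in setup.cfg to silence the unknown-marker warning.)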
One way: instead of adding addopts = --pspec to setup.cfg, it could be passed at run time. We could make a new Makefile target, test-verbose or vtest or something similar, which would pass this argument in.
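A minimal sketch of that Makefile idea (assuming the existing test target runs pytest over the tests directory; the target name test-verbose is illustrative):

```
test:
	python -m pytest tests

test-verbose:
	python -m pytest --pspec tests
```

That keeps `make test` compact by default, while `make test-verbose` opts in to the pspec-style docstring output.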