Developer chat

Trying to discover what is causing the kernel to restart in my conda environment, I executed each of these lines in the Python console:

from fastai import *
from fastai.vision import *
from fastai.collab import *
from fastai.tabular import *
from fastai.text import *
from fastai.docs import *

Every one of them stopped at the following lines:

# code object from '/home/marco/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/vision/__pycache__/transform.cpython-36.pyc'
import 'fastai.vision.transform' # <_frozen_importlib_external.SourceFileLoader object at 0x7fd88eb5f2b0>

Is ‘fastai.vision.transform’ being imported by collab, tabular, text, or docs too?
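
One way to check - a quick sketch, run in a fresh interpreter so nothing is cached:

import sys
import fastai.collab   # or fastai.tabular, fastai.text, fastai.docs
print('fastai.vision.transform' in sys.modules)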

To keep the conda packages and not install the pip ones, I did:

pip install -e . --no-dependencies

The spacy package is working fine and I can import fastai from the repository folder (until the kernel restarts, as described above).
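
To double-check which copy of fastai gets picked up, a quick one-liner (sketch):

python -c "import fastai; print(fastai.__version__, fastai.__file__)"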

Currently building a v2.0.13.dev3 of spaCy with the updated regex version. The exact pin was unfortunate, but regex doesn’t semver, making it hard to give a range :(. The dev version should be uploaded within the next half hour or so (CI can take some time).

Once it’s up, you should be able to set your version pin to spacy==2.0.13.dev3 to verify that it all works. I can then publish 2.0.13 properly, so you can set your pin to spacy>=2.0.13,<2.1.0
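
For illustration, the corresponding pin in a setup.py would look roughly like this (a sketch, not the actual fastai file):

install_requires = [
    'spacy==2.0.13.dev3',     # temporary, to verify the dev build
]
# and once 2.0.13 is published, switch to a range pin:
install_requires = [
    'spacy>=2.0.13,<2.1.0',
]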

Edit: Getting test failures with regex==2018.08.29; some of the tests are simply hanging. I guess I can try some earlier versions, but it makes me nervous about rushing out a version on this. I’m worried performance could be much worse on some inputs, for some languages.


Please note a minor change in dev install steps

In addition to having all pip dependencies managed in a single file (setup.py), the editable install is now done with:

pip install -e .[dev]

It’s almost the same as:

pip install -e .

but the former will also install extra dependencies needed only by developers. Of course, the latter works as before…

Down the road we can define a whole bunch of sub-project dependencies (e.g. just NLP-related deps) and install them with:

pip install -e .[nlp]
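
This relies on setuptools “extras”; a minimal sketch of how such groups might be declared in setup.py (package names here are illustrative, not the exact fastai lists):

from setuptools import setup, find_packages

setup(
    name='fastai',
    packages=find_packages(),
    install_requires=['numpy'],                # core deps (illustrative)
    extras_require={
        'dev': ['pytest', 'pytest-pspec'],     # developer-only extras
        'nlp': ['spacy>=2.0.13,<2.1.0'],       # a possible sub-project group
    },
)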

Maybe this?

Wow, sorry, now I get it: you guys are trying to build an environment for Python 3.7… Sorry about that.

conda update regex -c conda-forge

Package Plan

environment location: /home/ubuntu/miniconda3/envs/fastai

added / updated specs:
- regex

The following packages will be downloaded:

package                    |            build
---------------------------|-----------------
semver-2.8.1               |             py_1           7 KB  conda-forge
sputnik-0.9.3              |           py36_0          45 KB  conda-forge
thinc-5.0.8                |           py36_0         1.3 MB
spacy-0.101.0              |           py36_0         5.7 MB
preshed-0.46.4             |           py36_0         223 KB  conda-forge
regex-2018.08.29           |   py36h470a237_0         686 KB  conda-forge
murmurhash-0.26.4          |           py36_0          37 KB  conda-forge
------------------------------------------------------------
                                       Total:         8.0 MB

The following NEW packages will be INSTALLED:

semver:          2.8.1-py_1            conda-forge
sputnik:         0.9.3-py36_0          conda-forge

The following packages will be UPDATED:

ca-certificates: 2018.03.07-0                      --> 2018.8.24-ha4d7672_0      conda-forge
certifi:         2018.8.24-py36_1                  --> 2018.8.24-py36_1001       conda-forge
openssl:         1.0.2p-h14c3975_0                 --> 1.0.2p-h470a237_0         conda-forge
regex:           2017.11.09-py36_0     conda-forge --> 2018.08.29-py36h470a237_0 conda-forge

The following packages will be DOWNGRADED:

murmurhash:      0.28.0-py36hfc679d8_0 conda-forge --> 0.26.4-py36_0             conda-forge
preshed:         1.0.1-py36hfc679d8_0  conda-forge --> 0.46.4-py36_0             conda-forge
spacy:           2.0.12-py36hf8a1672_0 conda-forge --> 0.101.0-py36_0                       
thinc:           6.10.3-py36hf8a1672_3 conda-forge --> 5.0.8-py36_0

I started a new thread for install issues - please use that for any future reports. Thank you.

@elmarculino, thank you for your patience. Would you kindly post the updated state of the issues you’re experiencing in the thread I mentioned above? I’m also not sure whether you have this issue in Jupyter Notebook or Lab - does it work OK in the notebook? Can you run other non-fastai code? Thank you.

Also see 1 and 2 - perhaps one of these is your culprit? But please continue in the other thread with your details, since it’ll probably help others in the same boat. Thanks.

This might be useful to some of you - just discovered it:

Switching Conda Environments in Jupyter

Besides the usual way of switching environments, which requires restarting jupyter:

source activate env1
jupyter notebook
(Ctrl-C to kill jupyter)
source activate env2
jupyter notebook

You can install nb_conda_kernels, which provides a separate Jupyter kernel for each conda environment, along with the appropriate code to handle their setup. This makes switching conda environments as simple as switching the Jupyter kernel (e.g. from the kernel menu), and you don’t need to worry about which environment you started jupyter notebook from - just choose the right environment from the notebook.

source: https://stackoverflow.com/a/47262847/9201239
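
If I read its docs correctly, the setup is roughly as follows (commands are my assumption - check the project README):

conda install -n base nb_conda_kernels   # in the env you launch jupyter from
conda install -n env1 ipykernel          # in each env you want listed as a kernel
conda install -n env2 ipykernel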


Thank you, @TheShadow29. I haven’t yet tried installing with miniconda. I have added conda update conda to the docs, which should take care of this situation.


If possible please test the new diagnostics function:

git pull
python -c 'import fastai; fastai.show_install(1)'

If possible, please test it on a CPU-only setup too.

We need it now to help with debugging install issues, and also it will be useful for dealing with functionality bug reports.

Thank you.

Dear Stas,

this is what I get on a CPU-only system:

platform info  : Darwin-17.7.0-x86_64-i386-64bit
python version : 3.6.6
fastai version : 1.0.5.dev0
torch version  : 1.0.0.dev20180921
cuda available: False
cuda version   : None
cudnn available: True
gpu count      : 0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/XY/Downloads/fastai/fastai/torch_core.py", line 242, in show_install
    gpus = GPUtil.getGPUs()
  File "/Users/XY/anaconda3/lib/python3.6/site-packages/GPUtil/GPUtil.py", line 64, in getGPUs
    p = Popen(["nvidia-smi","--query-gpu=index,uuid,utilization.gpu,memory.total,memory.used,memory.free,driver_version,name,gpu_serial,display_active,display_mode", "--format=csv,noheader,nounits"], stdout=PIPE)
  File "/Users/XY/anaconda3/lib/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/Users/XY/anaconda3/lib/python3.6/subprocess.py", line 1344, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'nvidia-smi': 'nvidia-smi'

It seems to need the nvidia-smi tool, which it does not find on this CPU-only machine.

I will check it later on my paperspace machine with GPU and post what I get there.

Best regards
Michael

Thanks a lot, @MicPie. I have made some extra tweaks and hopefully now you should get a clean output.

git pull
python -c 'import fastai; fastai.show_install(1)'

Thank you!

I’ve just added a module that will display test docstrings when running tests, so you’ll need to install it:

pip install pytest-pspec
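
With it installed, the docstring of each test (rather than the bare function name) is what shows up in the report. A tiny illustrative example (hypothetical test, not one from the suite):

def test_learner_fits_one_epoch():
    "it trains for one epoch without raising an error"
    assert True   # placeholder body; the real tests live under tests/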

I’ve added a basic end-to-end MNIST vision test that checks >98% accuracy after 1 epoch. It takes about 5 secs on a 1080ti. I think it’s a good idea to have at least one full integration test, although I’m open to using something else if the speed of this one is an issue for too many people. Or maybe there needs to be some easy way for particular people to disable it, if they don’t have a GPU.


Dear Stas,

I still get a similar error after the line with “torch gpu count”:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/MMP/Downloads/fastai/fastai/torch_core.py", line 271, in show_install
    gpus = GPUtil.getGPUs()
  File "/Users/MMP/anaconda3/lib/python3.6/site-packages/GPUtil/GPUtil.py", line 64, in getGPUs
    p = Popen(["nvidia-smi","--query-gpu=index,uuid,utilization.gpu,memory.total,memory.used,memory.free,driver_version,name,gpu_serial,display_active,display_mode", "--format=csv,noheader,nounits"], stdout=PIPE)
  File "/Users/MMP/anaconda3/lib/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/Users/MMP/anaconda3/lib/python3.6/subprocess.py", line 1344, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'nvidia-smi': 'nvidia-smi'

Best regards
Michael

OK, GPUtil is a wrapper for nvidia-smi and doesn’t handle its absence gracefully. I removed it. If you can kindly try a third time after git pull - hopefully it’ll work OK now. Thank you for your support, @MicPie.

Do we want this enabled for make test though? This addition puts it into the detailed mode, whereas normally we want it to be compact.

Let’s experiment. What’s an easy way to tell torch to ignore my GPU?

Yes, we probably need to skip this kind of test on CPU by default and have an option to override that; otherwise people won’t run the test suite. I have an old PC at the moment, so it’s very slow on CPU:

time py.test tests/test_vision.py
time CUDA_VISIBLE_DEVICES=" "  py.test tests/test_vision.py

w/  GPU: ~30 secs
w/o GPU: ~15 min
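
One possible way to have such tests skip themselves on CPU-only machines (a sketch, not something already in the repo):

import pytest
import torch

# skip GPU-heavy tests when no CUDA device is available
@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires a GPU")
def test_end_to_end_mnist():
    "full MNIST run (hypothetical placeholder)"
    assert True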

No, we don’t. Sorry, I didn’t realize it changes the detail level. Ideally I’d just like it to show the pspec-style names of failing tests. I’ll add figuring that out to the todo list.

I’d like devs to always run the integration test before pushing a non-trivial change.

I haven’t spent time learning about pytest yet but I’m sure there will be some way we can have different categories of tests.
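
(For the record, pytest does support custom markers for exactly this kind of grouping; a sketch of one possible convention, not something in the repo yet:)

import pytest

@pytest.mark.integration            # hypothetical marker name
def test_full_training_run():
    "slow end-to-end test"
    assert True

# deselect the slow tests:   pytest -m "not integration"
# run only the slow tests:   pytest -m integration
# (markers can be registered under [tool:pytest] in setup.cfg to avoid warnings)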


One way: instead of adding addopts = --pspec to setup.cfg, the option would need to be added at run time. We could make a new Makefile target - test-verbose, vtest, or something similar - which would pass this argument in.
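
Something along these lines, perhaps (hypothetical Makefile snippet, name and layout to be decided):

test-verbose:
	python -m pytest --pspec tests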

@stas, most of the time I’m running just one test file, so I’m looking for something where “pytest file.py” shows the full names of the tests.

Just pushed a few commits that may be of general interest:

  • If you have a Learner called learn, you can now say doc(learn.fit) instead of needing doc(Learner.fit)
  • I’ve added a mention of doc in the docs’ index.html - it’ll show documentation in a preview window in Jupyter
  • Callbacks now have an order. Default is 0; Recorder is -10. If you want your callback to have a different order, just set its _order attribute (see the sketch after this list)
  • When creating an Image, you can now pass an ndarray and it’ll be turned into a tensor for you
  • Added a rand_pad function that does basic padding and random cropping, as used for CIFAR10
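
A minimal sketch of the callback-ordering item above (the import path is my assumption and may differ between fastai versions):

from fastai.callback import Callback   # assumed location of the base class

class MyCallback(Callback):
    "hypothetical callback given a non-default order"
    _order = -5   # default is 0; Recorder uses -10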

$ pytest --pspec tests/test_vision.py

I was just trying to find a way to avoid having to remember the long option; I guess an alias would do.