So… I got it working, but the way to do it was very weird. I'll explain it here in case anyone else with a long-standing conda setup on macOS has similar problems.
Before my last post, I had been reinstalling fresh conda environments before each attempt. I later tried deleting and reinstalling Miniconda itself twice. I finally got the tests working (it's able to import fastai2) by deleting every hidden file/folder I could find relating to jupyter, conda, or ipython, on top of a fresh conda reinstall.
`nbdev_test_nbs` tests pass in /nbdev and /fastcore. 17 notebooks fail in /fastai2, but these failures seem more 'normal': the result of CUDA tests on a CPU-only system, or import failures from wrong package versions ("No module named 'wandb'", "cannot import name 'PILLOW_VERSION' from 'PIL'", and "Torch not compiled with CUDA enabled", for example).
This is with a totally clean install of Miniconda; my base env doesn't even have Jupyter in it yet. To get it working (along with a bunch of restarts, to be on the safe side), what I did was:
- delete my Miniconda root directory
- delete every hidden folder and file I could find in my Home directory that seemed relevant: `.jupyter`, plus anything with 'ipython', 'conda', etc. in its name (a rough sketch is below)
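A rough sketch of that cleanup, assuming the default Miniconda install location; the dotfile names are the ones I found on my machine, so double-check what's actually in your Home directory before deleting anything:

```bash
# remove the Miniconda root directory (default location assumed)
rm -rf ~/miniconda3

# remove hidden config/cache folders left behind by jupyter, ipython, and conda
rm -rf ~/.jupyter ~/.ipython ~/.conda
rm -f ~/.condarc
```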
I had already commented out any conda-related lines in my `.zshrc` (the macOS equivalent of `.bashrc` or `.bash_profile`). Then I redownloaded Miniconda, verified it, installed it, and (since it isn't configured to write to Zsh) copied its conda-initialization lines from `~/.bash_profile` to `~/.zshrc`. Checking `pip list` and `conda list` after restarting Terminal, the base conda env only has a handful of packages.
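If your conda is new enough to have the `init` subcommand (4.6+), it can write that initialization block for you instead of you copying it by hand; a minimal version of the check I did afterwards:

```bash
# have conda append its shell-initialization block to ~/.zshrc
conda init zsh

# open a new terminal (or re-source the config), then confirm
# the base env really is near-empty
source ~/.zshrc
conda list
pip list
```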
Then I installed fastai2 into its own env, along with fastcore. I don't remember whether I used the packaged or the editable install; I think I tested both. `nbdev_test_nbs` worked, with the new errors mentioned above; I also dev-installed nbdev, and its own tests passed.
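For anyone unsure what the two flavors look like, roughly (based on the fastai2 repo README at the time; check the current README in case the steps have changed):

```bash
# packaged install: pulls the published release into the active env
pip install fastai2

# editable ("dev") install: clone the repo and link it into the active env,
# so edits to the source take effect without reinstalling
git clone https://github.com/fastai/fastai2
cd fastai2
pip install -e ".[dev]"
```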
This method deletes all Jupyter customizations and conda environments.
Where I think the issue was:
I still had errors even after deleting and reinstalling conda. I noticed that Jupyter's kernel dropdown still listed old 'named' kernels, i.e. kernels for environments I had already deleted. That suggests ipython/ipykernel had config files pointing at them, which new Jupyter installations were picking up. I only had one for "fastai" and a couple of unrelated kernels like Scala, so I don't know exactly how this tripped up `nbdev_test_nbs` into being unable to import fastai2.
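If you'd rather not delete everything, stale kernels can be listed and removed directly with Jupyter's `kernelspec` command ("fastai" below is just my leftover kernel name; substitute whatever shows up in your list):

```bash
# show every kernelspec Jupyter knows about, with its directory
jupyter kernelspec list

# remove a kernelspec whose environment no longer exists
jupyter kernelspec remove fastai
```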
What added confusion early in this process was not knowing how "developer"/"editable" pip/conda/python installs work. I'm used to the pre-0.7 course-v1 workflow of freely editing files in a fastai folder, and I wasn't sure whether dev-installs went to the base/system python 'env' or to the active env (it's the active conda env). Also, macOS's switch from Bash to Zsh as its default shell means the `pip install -e .[dev]` line doesn't work as-is: the brackets must be quoted (double or single), i.e. `pip install -e '.[dev]'`. And I didn't know the significance of ".[dev]", since `pip install -e .` appeared to run just fine, so I thought ".[dev]" might be some informal programmer shorthand. Not the case.
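To make the Zsh issue concrete (the error line is what Zsh prints when a glob matches nothing):

```zsh
# unquoted: zsh treats .[dev] as a filename glob and refuses to run the command
pip install -e .[dev]
# zsh: no matches found: .[dev]

# quoted: the string reaches pip intact, which installs the package plus the
# optional "dev" extras declared in its setup files
pip install -e '.[dev]'
```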
Hopefully this helps anyone facing a similar problem.
Note: it looks like the new PIL (7.0.0) doesn't work with Torchvision as of 3 January (hence the PIL & PILLOW_VERSION error), but the PyTorch devs are aware and plan to update PyTorch and Torchvision to fix it this week. Until then, pinning Pillow with `pip install "pillow<7"` seems to work.
edit: is fastai2 currently not meant to work on CPU-only systems? After installing a few dependencies (seemingly not taken care of by the fastai2 repo's environment.yml env install):

```
pip install wandb
conda install -c fastai -c pytorch fastai
pip install tensorboard
```

the only tests that fail seem to all be CUDA related. Only 6 fail now.
Interesting, since Apple has dropped all CUDA support… I wonder if it's possible to install a CUDA build of PyTorch on macOS just for code compatibility. I'll update here if I try it out.